IJACSA Volume 10 Issue 3

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.


Paper 1: LSB based Image Steganography by using the Fast Marching Method

Abstract: This paper presents a novel approach to image steganography based on the Least Significant Bit (LSB) method. Most traditional LSB methods choose the initial embedding location in the cover image randomly, and the secret messages are embedded sequentially without considering the values and positions of the image pixels. Our approach uses user-selected seeds in the cover image to avoid the smooth/flat areas, which cause a higher detection rate. The fast marching method is then used to calculate T (the arrival time of the front from the seeds) and to propagate the seeds by computational dynamics. The front propagation process decides the embedding positions of the secret messages, and the same algorithm can be used to retrieve the hidden information. The coordinates of the seeds serve as a shared key known only to the sender and receiver, adding a further layer of security. Peak Signal to Noise Ratio (PSNR) is evaluated to measure the quality of the resulting images. The experiments show that the proposed approach achieves high payload capacity and satisfactory imperceptibility.
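
For illustration, here is a minimal Python sketch of the embedding-order idea, assuming the scikit-fmm package for the fast marching step; the speed function (fronts moving faster through textured regions) and the single-seed setup are simplifying assumptions, not the authors' exact formulation.

```python
import numpy as np
import skfmm  # pip install scikit-fmm

def embed(cover, bits, seed):
    """Embed message bits along the fast-marching front started at `seed`."""
    phi = np.ones(cover.shape)
    phi[seed] = -1                                  # zero level set at the shared seed
    gy, gx = np.gradient(cover.astype(float))
    speed = 1.0 + np.hypot(gy, gx)                  # assumption: front favors textured areas
    t = np.asarray(skfmm.travel_time(phi, speed))   # arrival time T of the front
    order = np.argsort(t, axis=None)                # earliest-reached pixels first
    stego = cover.copy().ravel()
    for bit, idx in zip(bits, order):
        stego[idx] = (stego[idx] & 0xFE) | bit      # overwrite the LSB
    return stego.reshape(cover.shape)

# The receiver repeats the same computation with the shared seed coordinates
# to recover the identical embedding order and extract the bits.
```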

Author 1: Xiaoli Huan
Author 2: Hong Zhou
Author 3: Jiling Zhong

Keywords: Image steganography; LSB; the fast marching method; coordinates; PSNR

PDF

Paper 2: Developing Deep Learning Models to Simulate Human Declarative Episodic Memory Storage

Abstract: Human-like visual and auditory sensory devices have become very popular in recent years through deep learning models that incorporate aspects of brain processing, such as the edge and line detectors found in the visual cortex. However, very little work has been done on human memory, and thus our aim is to model human long-term declarative episodic memory storage using deep learning methods. A deep neural network was trained on a supervised feature-learning dataset, MNIST, to achieve high accuracy while storing the model's hidden layers for future extraction. Convolutional Neural Network (CNN) models with transfer learning were trained to imitate the long-term declarative episodic memory storage of humans. A Recurrent Neural Network (RNN) in the form of a Long Short-Term Memory (LSTM) model was assembled in layers, trained, and evaluated. A Variational Autoencoder was also trained and evaluated to mimic the human memory model. Frameworks were constructed using TensorFlow for training and testing the deep learning models.
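
The paper's exact architectures are not given in the abstract, so the following is only an illustrative TensorFlow/Keras sketch of one of the named ingredients: an LSTM classifier on MNIST (each 28x28 image treated as 28 time steps of 28 features), with the hidden layer's activations stored afterwards.

```python
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0   # 28 time steps of 28 features each

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(128, input_shape=(28, 28)),   # recurrent "memory" layer
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))

# Hidden-layer activations can be extracted and stored for later inspection:
embedder = tf.keras.Model(model.input, model.layers[0].output)
stored_features = embedder.predict(x_test[:100])
```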

Author 1: Abu Kamruzzaman
Author 2: Charles C. Tappert

Keywords: Convolutional neural network; long short term memory; Variational Autoencoder; deep learning; memory model; machine learning

PDF

Paper 3: Automatic Control of Colonoscope Movement for Modern Colonoscopy

Abstract: The paper presents the mathematical realization of the trajectory that the colonoscope should follow during a medical intervention, as well as the mathematical derivation of the functions that make up the colonoscope. The goal of this work is to find a method for reducing the medical doctor's effort by using intelligent control of the colonoscope movement, improving the comfort of the patient subjected to a classical colonoscopy and reducing the risk of perforating the colon. Finally, some experimental results are presented, validating the model and the control solutions adopted in the paper.

Author 1: Helga Silaghi
Author 2: Viorica Spoiala
Author 3: Alexandru Marius Silaghi
Author 4: Tiberia Ioana Ilias
Author 5: Cornelia Gyorödi
Author 6: Claudiu Costea
Author 7: Sanda Dale
Author 8: Ovidiu Cristian Fratila

Keywords: Intelligent control; colonoscope movement; classical colonoscopy; mathematical model

PDF

Paper 4: Performance Analysis of Open Source Solution "ntop" for Active and Passive Packet Analysis Relating to Application and Transport Layer

Abstract: A key issue facing operators around the globe is finding the most appropriate way to spot black spots in networks. For this purpose, the technique of passive network monitoring is very appropriate; it can be used to tackle problems within individual network devices, or problems relating to the whole LAN (Local Area Network) or core network. This technique, however, is not just relevant for troubleshooting; it can also be used for compiling network statistics and analyzing network performance. In real-world network scenarios, many applications and processes simultaneously download and upload data, and it is sometimes very difficult to keep track of all of it. Wireshark is a tool normally used to track packets for analysis between two particular hosts during particular sessions on the same network. However, Wireshark has some limitations: for example, it is not a good tool for keeping track of bulk network data transferred among various endpoints. On the other side, the open source solution "ntop" offers active as well as passive packet analysis, which can be handy for system administrators, network engineers and IT managers. Additionally, VoIP traffic can also be monitored with ntop. In this research work, the ntop solution was deployed on a network facility, and its performance for various application-layer processes such as HTTP and SSDP (based on HTTPU), together with their associated transport protocols such as TCP/IP, UDP, and VoIP, was analyzed. These processes and protocols were comprehensively analyzed with respect to their client/server breakdown, connection duration, actual throughput, total bytes (received and sent) and total bandwidth consumed. This study helps to identify the weakest and strongest areas of a particular network when analyzing and deploying network policies, and will help the research community deploy ntop for active and passive real-time monitoring.

Author 1: Sirajuddin Qureshi
Author 2: Dr Gordhan Das
Author 3: Saima Tunio
Author 4: Faheem Ullah
Author 5: Ahsan Nazir
Author 6: Ahsan Wajahat

Keywords: ntop; network monitoring; packet analysis; the application layer; transport layer

PDF

Paper 5: Achieving Flatness: Honeywords Generation Method for Passwords based on user behaviours

Abstract: Honeywords (decoy passwords) have been proposed to detect attacks against hashed password databases. For each user account, the original password is stored with many honeywords in order to thwart any adversary. The honeywords are selected deliberately so that a cyber-attacker who steals a file of hashed passwords cannot be sure whether a given entry is the real password or a honeyword for any account. Moreover, logging in with a honeyword triggers an alarm notifying the administrator of a password file breach. At the expense of increasing the storage requirement by 24 times, the authors introduce a simple and effective solution for detecting password file disclosure events. In this study, we scrutinise the honeyword system and highlight possible weak points. We also suggest an alternative approach that selects the honeywords from existing user information, a generic password list, a dictionary attack, and by shuffling the characters. Four sets of honeywords that resemble the real passwords are added to the system, thereby achieving an extremely flat honeyword generation method. To measure human behaviour when trying to crack passwords, a testbed involving 820 people was created to determine the appropriate words for the traditional and proposed methods. The results show that under the new method it is harder to obtain any indication of the real password (high flatness) compared with traditional approaches, and the probability of choosing the real password is 1/k, where k is the number of honeywords plus the real password.
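
A toy sketch of the character-shuffling generation strategy follows; the k = 25 default matches the "24 times" storage overhead above, while the word list and mixing ratio are hypothetical (the paper's full method also draws on user information and a dictionary attack).

```python
import hashlib
import random

def shuffle_honeyword(password):
    """Create a decoy by shuffling the real password's characters."""
    chars = list(password)
    random.shuffle(chars)
    return "".join(chars)

def sweetwords(real, k=25, wordlist=("123456", "qwerty", "iloveyou", "dragon")):
    """Return k hashed sweetwords: the real password hidden among k-1 honeywords."""
    decoys = set()
    while len(decoys) < k - 1:          # assumes `real` has enough distinct shuffles
        decoys.add(shuffle_honeyword(real) if random.random() < 0.5
                   else random.choice(wordlist))
    sweet = list(decoys) + [real]
    random.shuffle(sweet)               # flat: an attacker's guess succeeds with P = 1/k
    return [hashlib.sha256(w.encode()).hexdigest() for w in sweet]
```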

Author 1: Omar Z Akif
Author 2: Ann F. Sabeeh
Author 3: G. J. Rodgers
Author 4: H. S. Al-Raweshidy

Keywords: Honeywords; user behaviours; worst password list; dictionary attack

PDF

Paper 6: Robust Recurrent Cerebellar Model Articulation Controller for Non-Linear MIMO Systems

Abstract: This research proposes a robust recurrent cerebellar model articulation control system (RRCMACS) for MIMO non-linear systems, aiming to preserve robustness during operation. In this system, the superior properties of the recurrent cerebellar model articulation controller (RCMAC) are used to imitate an ideal sliding-mode controller. The robust controller efficiently attenuates the effects of uncertainties, external disturbances, and noise to maintain the robustness of the system. The parameters of the controller are updated according to a Lyapunov-like lemma, so the stability and robustness of the system are guaranteed. Simulation results for a micro-motion stage system are given to prove the effectiveness and applicability of the proposed control system for model-free non-linear systems.

Author 1: Xuan-Kien Dang
Author 2: Van-Phuong Ta

Keywords: Robust controller; MIMO non-linear systems; linear piezoelectric motor (LPM); recurrent network; Cerebellar model articulation controller

PDF

Paper 7: Empirical Assessment of Ensemble based Approaches to Classify Imbalanced Data in Binary Classification

Abstract: Classifying imbalanced data with traditional classifiers is a major challenge nowadays. Imbalanced data is a situation wherein the ratio of data within classes is not the same. Many real-life situations deal with such problems, e.g. web spam detection, credit card fraud, and fraudulent telephone calls. The problem exists wherever the objective is to identify exceptional cases. Researchers handle it either by modifying existing classification methods or by developing new ones. This paper reviews ensemble-based approaches (Boosting- and Bagging-based) designed to address class imbalance, focusing on binary classification. We compared 6 Boosting-based, 7 Bagging-based and 2 hybrid ensembles for their performance in the imbalanced domain. We used the KEEL tool to evaluate these methods on seven imbalanced datasets with class imbalance ratios ranging from 1.82 to as high as 129.44. The Area Under the Curve (AUC) is recorded as the performance metric. We also statistically analysed the methods using the Friedman rank test and the Wilcoxon matched-pairs signed-rank test to strengthen the visual interpretations. The analysis shows that the RUSBoost ensemble outperformed every other ensemble in the imbalanced data situations.
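
For reference, the winning method is available off the shelf; a minimal sketch using imbalanced-learn on a synthetic dataset with roughly a 9:1 class ratio (the KEEL datasets themselves are not reproduced here):

```python
from imblearn.ensemble import RUSBoostClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

# Synthetic binary dataset with ~1:9 class imbalance (illustrative only)
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)

# RUSBoost = random undersampling of the majority class inside AdaBoost rounds
clf = RUSBoostClassifier(n_estimators=50, random_state=0)
auc = cross_val_score(clf, X, y, scoring="roc_auc", cv=5)
print(auc.mean())
```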

Author 1: Prabhjot Kaur
Author 2: Anjana Gosain

Keywords: Ensemble approaches; boosting; bagging; hybrid ensembles; imbalanced data-sets; classification

PDF

Paper 8: Applying Diffie-Hellman Algorithm to Solve the Key Agreement Problem in Mobile Blockchain-based Sensing Applications

Abstract: Mobile blockchain has achieved huge success with the integration of edge computing services. This concept, when applied in mobile crowd sensing, enables the transfer of sensor data from blockchain clients to edge nodes. Edge nodes perform proof-of-work on sensor data from blockchain clients and append validated data to the chain. With this approach, blockchain can be operated pervasively. However, securing sensitive sensor data in a mobile blockchain (client/edge-node architecture) becomes imperative. To this end, this paper proposes an integrated framework for mobile blockchain that ensures key agreement between clients and edge nodes using the Elliptic Curve Diffie-Hellman algorithm. The framework also provides efficient encryption of sensor data using the Advanced Encryption Standard algorithm. Finally, the key agreement processes in the framework were analyzed, and the results show that key pairing between the blockchain client and the edge node is a non-trivial process.
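
A minimal sketch of the key agreement plus encryption steps using the Python `cryptography` package; the curve choice, HKDF info label and payload are illustrative, since the paper's exact protocol parameters are not given in the abstract.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each side generates an EC key pair and exchanges public keys.
client_key = ec.generate_private_key(ec.SECP256R1())
edge_key = ec.generate_private_key(ec.SECP256R1())

# Both sides derive the same shared secret from the other party's public key.
shared = client_key.exchange(ec.ECDH(), edge_key.public_key())
aes_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"mcs-session").derive(shared)

# The client encrypts a sensor reading with AES-GCM before sending it to the edge node.
nonce = os.urandom(12)
ciphertext = AESGCM(aes_key).encrypt(nonce, b'{"accel_x": 0.12}', None)
plaintext = AESGCM(aes_key).decrypt(nonce, ciphertext, None)
```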

Author 1: Nsikak Pius Owoh
Author 2: Manmeet Mahinderjit Singh

Keywords: Internet of Things; mobile crowd sensing; edge computing; sensor data encryption; mining; smart contract

PDF

Paper 9: Optimizing the Hyperparameter of Feature Extraction and Machine Learning Classification Algorithms

Abstract: The process of assigning a quantitative value to a piece of text expressing a mood or affect is called sentiment analysis. Several machine learning and feature extraction approaches, together with parameter optimization, were compared to achieve the best accuracy. This paper proposes an approach for comparing sentiment-review classification using three feature extraction methods, Word2vec, Doc2vec, and Term Frequency-Inverse Document Frequency (TF-IDF), with machine learning classification algorithms such as Support Vector Machine (SVM), Naive Bayes and Decision Tree. A grid search algorithm is used to optimize the feature extraction and classifier parameters. The performance of these classification algorithms is evaluated based on accuracy. The approach used in this research succeeded in increasing the classification accuracy for all feature extractions and classifiers by using grid search hyperparameter optimization on variously pre-processed data.
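
For the TF-IDF plus SVM branch of the comparison, a joint grid search over extractor and classifier parameters can be sketched with scikit-learn as follows; the parameter grid is illustrative, not the paper's exact search space.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

pipe = Pipeline([("tfidf", TfidfVectorizer()), ("svm", SVC())])
param_grid = {
    "tfidf__ngram_range": [(1, 1), (1, 2)],   # feature-extraction parameters
    "tfidf__min_df": [1, 3],
    "svm__C": [0.1, 1, 10],                   # classifier parameters
    "svm__kernel": ["linear", "rbf"],
}
search = GridSearchCV(pipe, param_grid, scoring="accuracy", cv=5)
# search.fit(reviews, labels)   # reviews: list of str, labels: list of int
# print(search.best_params_, search.best_score_)
```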

Author 1: Sani Muhammad Isa
Author 2: Rizaldi Suwandi
Author 3: Yosefina Pricilia Andrean

Keywords: Sentiment analysis; word2vec; TF-IDF (terms frequency-inverse document frequency); Doc2vec; grid search

PDF

Paper 10: Stabilizing Average Queue Length in Active Queue Management Method

Abstract: This paper proposes the Stabilized Dynamic Gentle Random Early Detection (SDGRED) method for congestion detection at the router buffer. This method aims to stabilize the average queue length between the allocated min-threshold and double max-threshold positions to increase network performance. The SDGRED method is simulated and compared with the Gentle Random Early Detection (GRED) and Dynamic GRED active queue management methods. The comparison is built on several important measures, such as dropping probability, throughput, average delay, packet loss, and mean queue length. The evaluation aims to identify which method gives better simulation performance when non-congestion or congestion situations occur at the router buffers. The results show that at high packet arrival probability, the proposed algorithm yields lower queue length, delay, and packet loss compared with current methods, while still generating adequate throughput.
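
For context, the gentle RED drop function that GRED-style methods build on can be sketched as below; SDGRED's own stabilization rule is not spelled out in the abstract, so this shows only the baseline behavior between the thresholds named above.

```python
def gred_drop_probability(avg, min_th, max_th, d_max=0.1):
    """Gentle RED: no drops below min_th, a linear ramp to d_max up to max_th,
    then a gentler ramp from d_max to 1.0 between max_th and 2*max_th."""
    if avg < min_th:
        return 0.0
    if avg < max_th:
        return d_max * (avg - min_th) / (max_th - min_th)
    if avg < 2 * max_th:
        return d_max + (1.0 - d_max) * (avg - max_th) / max_th
    return 1.0

# Example: with min_th=5, max_th=15 packets, an average queue of 20 packets
# falls in the "gentle" region and is dropped with probability ~0.4.
print(gred_drop_probability(20, 5, 15))
```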

Author 1: Mahmoud Baklizi

Keywords: Congestion control methods; GRED; dynamic GRED; random; simulation; active queue management method

PDF

Paper 11: Survey on Human Activity Recognition based on Acceleration Data

Abstract: Human activity recognition (HAR) is an important area of machine learning research, as it has many applications in different areas such as sports training, security, entertainment, ambient-assisted living, and health monitoring and management. The literature shows that researchers are interested mostly in recognising the daily activities of humans. The general architecture of a HAR system is therefore presented in this paper, along with a description of its main components, and the state of the art in accelerometer-based human activity recognition is surveyed. According to this survey, most recent research has used deep learning for HAR, but it has focused on CNNs even though other deep learning architectures have achieved satisfactory accuracy. The paper presents a two-level taxonomy according to the machine learning approach (traditional or deep learning) and the processing mode (online or offline). Forty-eight studies are compared in terms of recognition accuracy, classifier, activity types, and devices used. Finally, the paper discusses the challenges and issues of online versus offline processing, and of deep learning versus traditional machine learning, for accelerometer-based human activity recognition.

Author 1: Salwa O. Slim
Author 2: Ayman Atia
Author 3: Marwa M.A. Elfattah
Author 4: Mostafa-Sami M. Mostafa

Keywords: Human activity recognition; accelerometer; online system; offline system; traditional machine learning; deep learning

PDF

Paper 12: A Systematic Review of Domains, Techniques, Delivery Modes and Validation Methods for Intelligent Tutoring Systems

Abstract: An Intelligent Tutoring System (ITS) is computer software that helps students learn educational or academic concepts in a customized environment. ITSs are instructional systems capable of facilitating learning by providing instantaneous feedback and instructions without any human intervention. Advances in new technologies have integrated computer-based learning with artificial intelligence methods, with the aim of developing better custom-made education systems, referred to as ITSs. One of the important factors that affects the learning process is self-learning; not all students have a similar experience when learning scholastic concepts from the same educational material, because individual differences make some topics difficult or easy to understand for a given student. These systems can improve teaching and learning in different educational domains while respecting individual learning needs. In this study an attempt is made to review the research in the field of ITSs and to highlight the educational areas or domains in which ITSs have been introduced. The techniques, delivery modes and evaluation methodologies used in developed ITSs are also discussed. This work will be helpful for both academia and newcomers to the field of ITSs in further strengthening the basis of tutoring systems in educational domains.

Author 1: Aized Amin Soofi
Author 2: Moiz Uddin Ahmed

Keywords: ITS; intelligent tutoring system; intelligent learning; adaptive learning; intelligent tutoring; ITS review

PDF

Paper 13: Personalized Recommender by Exploiting Domain based Expert for Enhancing Collaborative Filtering Algorithm: PReC

Abstract: The large amount of information available on the internet has prompted various recommender algorithms that act as intermediaries between the multitude of choices and internet users. Collaborative filtering is one of the most traditional and intensively used recommendation approaches for many commercial services. Despite providing satisfying outcomes, it has some issues, including source diversity, reliability, data sparsity, scalability and cold start. Thus, further improvement of the current generation of recommender systems is needed to achieve more effective human decision support in a wide variety of applications and scenarios. A Personalized Expert-based Collaborative filtering (PReC) approach is proposed to identify domain-specific experts; using the experts' preferences enhances the performance of collaborative filtering recommender systems. A unified framework is proposed that integrates similar users' rating data, experts' ratings and demographic data to reduce the number of pairwise computations in the search space, ensuring scalability and enabling fine-grained recommendations. The proposed method is evaluated using the accuracy metrics MAE and RMSE on data collected from the MovieLens datasets.

Author 1: Mrs. M. Sridevi
Author 2: Dr. R. Rajeswara Rao

Keywords: Recommender system; collaborative filtering; domain based experts; demographic data

PDF

Paper 14: Model Reference Adaptive Control Design for Nonlinear Plants

Abstract: In this paper, the basic theory of model reference adaptive control design and issues of particular relevance to controlling nonlinear dynamic plants with a relative degree greater than or equal to one and unknown parameters are detailed. The analysis is motivated by its application to a robot manipulator with six degrees of freedom. After linearization using the input-output feedback linearization and decoupling algorithm, the nonlinear multi-input multi-output system was transformed into six independent single-input single-output linear subsystems, each with a relative degree equal to two. The results obtained in different simulations show that the augmented model reference adaptive controller was successfully implemented.
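
As a pocket-sized illustration of the model reference adaptive idea, the classic MIT-rule gain adaptation for a first-order plant is sketched below; this is a textbook example under stated assumptions, not the paper's six-DOF manipulator design.

```python
# MIT-rule gain adaptation: plant dy/dt = -a*y + b*u, reference model
# dym/dt = -a*ym + bm*r. Only the plant gain b is treated as unknown,
# so the ideal feedforward gain is theta* = bm / b.
dt, gamma = 1e-3, 0.5
a, b, bm = 2.0, 1.0, 2.0
y = ym = theta = 0.0
for k in range(200_000):
    r = 1.0 if (k * dt) % 20.0 < 10.0 else -1.0   # square-wave reference
    u = theta * r                                 # adjustable feedforward controller
    y += dt * (-a * y + b * u)                    # plant (Euler integration)
    ym += dt * (-a * ym + bm * r)                 # reference model
    e = y - ym                                    # tracking error
    theta -= dt * gamma * e * ym                  # MIT rule: dtheta/dt = -gamma*e*ym
print(f"theta converged to {theta:.3f} (ideal {bm / b:.3f})")
```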

Author 1: Wafa Ghozlane
Author 2: Jilani Knani

Keywords: Model reference adaptive control; nonlinear dynamic plants; relative degree; unknown parameters; robot manipulator; input-output feedback linearization

PDF

Paper 15: Diagnosis of Parkinson’s Disease based on Wavelet Transform and Mel Frequency Cepstral Coefficients

Abstract: The aim of the study presented in this paper is to determine the appropriate wavelet analysis, combined with the extraction of MFCC coefficients, for assisting the diagnosis of Parkinson's disease. The analysis is based on a database of 18 healthy subjects and 20 Parkinsonian patients. The suggested processing transforms the speech signal by the wavelet transform, testing several kinds of wavelets, extracts Mel Frequency Cepstral Coefficients (MFCC) from the signals, and applies a support vector machine (SVM) as the classifier. The test results reveal that the best recognition rate, 86.84%, is obtained by the level-2 wavelets at the 3rd scale (Daubechies, Symlet, ReverseBior or BiorSpline) in the wavelet-MFCC-SVM combination.
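
One plausible reading of the wavelet-MFCC-SVM pipeline, sketched with PyWavelets, librosa and scikit-learn; the choice of the 3rd-scale approximation coefficients and the mean-pooled MFCCs are assumptions, not the authors' exact feature recipe.

```python
import numpy as np
import pywt
import librosa
from sklearn.svm import SVC

def dwt_mfcc_features(path, wavelet="db2", level=3, n_mfcc=13):
    """Wavelet-decompose a voice recording, then extract mean MFCCs
    from the level-3 approximation coefficients."""
    y, sr = librosa.load(path, sr=None)
    approx = pywt.wavedec(y, wavelet, level=level)[0]   # 3rd-scale approximation
    mfcc = librosa.feature.mfcc(y=approx, sr=sr // 2**level, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# X = np.vstack([dwt_mfcc_features(p) for p in recordings])  # recordings: file paths
# clf = SVC(kernel="rbf").fit(X, labels)
```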

Author 1: Taoufiq BELHOUSSINE DRISSI
Author 2: Soumaya ZAYRIT
Author 3: Benayad NSIRI
Author 4: Abdelkrim AMMOUMMOU

Keywords: Parkinson disease; discrete wavelet transform; MFCC; Support Vector Machine (SVM)

PDF

Paper 16: Risk Factors for Software Requirements Change Implementation

Abstract: Requirements change has been regarded as a substantial risk in software development projects. The factors that contribute to the risk are identified through impact analysis, which in turn determines the planning of the change implementation. The analysis is however not straightforward, as the risk factors that constitute requirements change implementation are currently little explored. This paper identifies the risk factors, first collating them qualitatively through a review of related work and a focus group study. The factors are then confirmed quantitatively through a survey whose data are analysed using Partial Least Squares Structural Equation Modelling (PLS-SEM). The survey comprised 276 practitioners from the software industry who are involved in impact analysis. The results indicate that User, Project Team, Top Management, Third Party, Organisation, Identification of Change, Existing Product and Planning of Change Implementation are the significant risk factors in planning requirements change implementation.

Author 1: Marfizah A. Rahman
Author 2: Rozilawati Razali
Author 3: Fatin Filzahti Ismail

Keywords: Requirements change; risk factor; structural equation modeling

PDF

Paper 17: Managing and Reducing Handoffs Latency in Wireless Local Area Networks using Multi-Channel Virtual Access Points

Abstract: This is the era of computer technology and related hybrid disciplines emerging as a multi-impact force in the technological world. As the number of technology users increases, expectations of the technology grow as well: users need high-speed networks to support high-speed devices, especially now that palm computers have hit the market. High data transfer rates must adequately support next-generation wireless network environments, and mobility adds further connectivity challenges. For many applications, users want a network that is heterogeneous in nature, with high availability and high bandwidth, to avoid issues in real-time applications and video streaming, including VoIP and multimedia over mobile networks. Mobile communication access thus relies massively on continuous network availability, which is achieved through handoffs that ensure the seamless transfer of a device from one AP to another. In this paper, I present a novel approach based on multi-channel virtual access points that nullifies or reduces handoff latency.

Author 1: Kamran Javed
Author 2: Fowad Talib
Author 3: Mubeen Iqbal
Author 4: Asif Hussain Khan

Keywords: Technology; mobility; heterogeneous; bandwidth; video streaming; mobile network; VoIP; handoffs

PDF

Paper 18: An Enhancement on Mobile Social Network using Social Link Prediction with Improved Human Trajectory Internet Data Mining

Abstract: Mobile social networks generally contain missing and unauthentic links. Predicting those links is one of the major problems in understanding the relationship between two nodes and in recommending potential links to users, based on the history of user-link interactions and their contextual information. The recommendation problem can be modeled as the prediction of future links between users. Many research works have been developed to understand the relationship between nodes and to construct models for predicting missing or suspicious links. Among those, the Improved Multi-Context Trajectory Embedding Model with Service Usage Classification Model (IMC-TEM-SUCM) improves human trajectory data mining by classifying internet traffic. However, this method still requires predicting the relationship between nodes and social links. Hence, in this article, IMC-TEM-SUCM is extended with a Social Link Prediction (SLP) mechanism for identifying the relationship between two nodes and predicting stable links. In this technique, a number of nodal features are considered, and their influence on the link prediction problem is examined on Foursquare and Gowalla. The extended network is used to compute two features, optimism and reputation, that depict a node's characteristics in a signed network. After that, meta-path-based features are considered, and the influence of route length on link prediction is examined. Moreover, link prediction is performed using machine learning classification algorithms over the extracted node-based and meta-path-based features. Cosine coefficient and Jaccard coefficient similarity measures are used to compute the similarity index between any two nodes; a higher similarity indicates a higher chance of a link forming between them. Finally, the performance of the proposed model is evaluated through experimental results on different real-world datasets.
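
The two neighborhood similarity measures named above are straightforward to compute on an adjacency-list graph; a minimal sketch with a toy graph:

```python
import math

def jaccard(adj, u, v):
    """|N(u) & N(v)| / |N(u) | N(v)| over neighbor sets."""
    nu, nv = set(adj[u]), set(adj[v])
    union = nu | nv
    return len(nu & nv) / len(union) if union else 0.0

def cosine(adj, u, v):
    """|N(u) & N(v)| / sqrt(|N(u)| * |N(v)|)."""
    nu, nv = set(adj[u]), set(adj[v])
    denom = math.sqrt(len(nu) * len(nv))
    return len(nu & nv) / denom if denom else 0.0

adj = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}
print(jaccard(adj, "a", "b"))   # shared neighbor c out of {a, b, c} -> 0.333...
print(cosine(adj, "a", "d"))    # 1 / sqrt(2 * 1) -> 0.707...
```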

Author 1: B. Suryakumar
Author 2: Dr. E. Ramadevi

Keywords: Mobile social network; improved multi-context trajectory embedding model with service usage classification model; social link prediction; machine learning; cosine coefficient; Jaccard coefficient

PDF

Paper 19: Using FDD for Small Project: An Empirical Case Study

Abstract: Empirical analysis evaluates a proposed system via practical experience and reveals its pros and cons. This type of evaluation is one of the most widely used validation approaches in software engineering. Conventional software process models performed well until the mid-1990s but were then gradually replaced by agile methodologies. This happened because of the various features the agile family offered that the conventional models failed to provide. Besides these advantages, however, agile models have weaknesses in some areas as well. To get the maximum benefit from any agile model, it is necessary to eliminate its weaknesses by customizing its development structure. Feature Driven Development (FDD) is one of the widely used agile models in the software industry, particularly for large-scale projects. The model has been criticized by many researchers for weaknesses such as its explicit dependency on experienced staff, little or no guidance for requirements gathering, a rigid structure that resists requirement changes, and a heavy development process. All these weaknesses make the FDD model suitable only for large-scale projects where the requirements are unlikely to change. This paper deals with the empirical evaluation of FDD during the development of a small-scale web project, so that the areas and practices that make this model suitable only for large projects can be identified with empirical proof. For effective evaluation, the results of the FDD case study are compared with a published case study of Extreme Programming (XP), which is widely used for small-scale projects.

Author 1: Shabib Aftab
Author 2: Zahid Nawaz
Author 3: Faiza Anwer
Author 4: Munir Ahmad
Author 5: Ahmed Iqbal
Author 6: Ashfaq Ahmad Jan
Author 7: Muhammad Salman Bashir

Keywords: Agile models; feature driven development; FDD; empirical evaluation; comparative analysis

PDF

Paper 20: Opinion Mining: An Approach to Feature Engineering

Abstract: Sentiment analysis, or opinion mining, refers to the process of identifying and categorizing subjective information in source materials using natural language processing (NLP), text analytics and statistical linguistics. The main purpose of opinion mining is to determine the writer's attitude towards a particular topic under discussion. This is done by identifying the polarity of a particular text passage using different feature sets. Feature engineering in the pre-processing phase plays a vital role in improving the performance of a classifier. In this paper we empirically evaluate various feature weighting mechanisms against well-established classification techniques for opinion mining, i.e. Naive Bayes Multinomial for binary polarity cases and SVM-LIN for multiclass cases. To evaluate these classification techniques we use the publicly available Rotten Tomatoes movie reviews dataset for training the classifiers, as it is widely used by the research community for the same purpose. The empirical experiment concludes that the feature set containing noun, verb, adverb and adjective lemmas with the feature-frequency (FF) function performs best among all feature settings, with 84% and 85% correctly classified test instances for Naive Bayes and SVM, respectively.

Author 1: Shafaq Siddiqui
Author 2: M. Abdul Rehman
Author 3: Sher M. Daudpota
Author 4: Ahmad Waqas

Keywords: Opinion mining; feature engineering; machine learning; classification; natural language processing

PDF

Paper 21: Cloud Server Security using Bio-Cryptography

Abstract: Data security is becoming more important in cloud computing. Biometrics is a computerized method of identifying a person based on physiological characteristics; among the features measured are the face, fingerprints, hand geometry and DNA. Biometrics can fortify storage on the cloud server through bio-cryptography. A bio-cryptographic key is used to secure the scrambled data in the cloud environment; the technique uses a fingerprint, voice or iris as the key factor securing data encryption and decryption on the cloud server. In this paper, the security of the biometric system in cloud computing is discussed, along with performance improvements that prevent criminals from accessing the data. Biometrics provide a genuine authentication feature for the cloud provider. A cryptographic scheme using blockchain technology is explained to overcome security issues: the blockchain provides additional protection through cryptographic keys that secure the biometric data.

Author 1: Zarnab Khalid
Author 2: Muhammad Rizwan
Author 3: Aysha Shabbir
Author 4: Maryam Shabbir
Author 5: Fahad Ahmad
Author 6: Jaweria Manzoor

Keywords: Cloud computing; biometrics; fingerprints; encryption and decryption methods; cryptography keys; bio-cryptography; blockchain

PDF

Paper 22: Enhanced Physical Document Management using NFC with Verification for Security and Privacy

Abstract: This study focuses on the implementation of physical document management for an organization using Near-Field Communication (NFC), since it provides faster detection when tracking items by location. Current physical document management operates using barcodes. However, barcodes can be duplicated, which makes them insecure and open to forgery and unauthorized modification. Therefore, the purpose of the proposed physical document management system is to provide better administrative control in an organization through a verification mechanism, which current NFC-based systems lack. Thus, an enhancement of physical document management with a verification process is proposed, and a self-developed system is built using C#, SQLite, Visual Studio, an NFC tag and an NFC reader (ACR122U-A9). The new system requires an employee to log in by scanning an ID tag followed by the physical document's File tag, both scanned at the NFC reader; the system then displays the information of the physical file coordinator and the location status of the physical document file. The significance of this study is to protect confidential documents and improve administrative control through dual verification, and to produce a database for monitoring real-time detection data.

Author 1: Z. Zainal Abidin
Author 2: N.A. Zakaria
Author 3: Z. Abal Abas
Author 4: A.A. Anuar
Author 5: N. Harum
Author 6: M.R. Baharon
Author 7: Z. Ayop

Keywords: Document file management system; physical document files detection; near-field communication (NFC)

PDF

Paper 23: A Machine Learning Approach for Predicting Nicotine Dependence

Abstract: An examination of the ability of machine learning methodologies to classify female waterpipe (WP) smokers' level of nicotine dependence is proposed in this work. In this study, we developed a classifier that predicts the level of nicotine dependence of female WP tobacco smokers using a set of novel smoker-related features including age, residency, and educational level. The evaluation results show that our approach achieves a recall of 82% when applied to a dataset of female WP smokers in Jordan.

Author 1: Mohammad Kharabsheh
Author 2: Omar Meqdadi
Author 3: Mohammad Alabed
Author 4: Sreenivas Veeranki
Author 5: Ahmad Abbadi
Author 6: Sukaina Alzyoud

Keywords: Machine learning; nicotine dependency; women; waterpipe; classification

PDF

Paper 24: The Opportunities and the Limitations of Using the Independent Post-Editor Technology in Translation Education

Abstract: A new mechanical function known as post-editing, which helps to correct the imperfections of raw machine translation output, has been introduced in the translation market. While this function is commonly used as an integral part of machine translation, it can also be used on its own for correcting non-translated texts. The main purpose of this study is to answer the question of what contributions the use of independent post-editors during academic translation education could make to the competences of translation students. A model course application and survey were conducted with the participation of students from translation studies departments, and interrater reliability was used in the analysis. The results show that most of the students were unaware of the use of independent post-editors during the translation process. The research provides new insights into the contributions of post-editor technology to translation education. Its findings also reflect the contributions of post-editor technology to translation quality in terms of speed and time management and the accuracy of punctuation, abbreviations, and grammar. It is also determined that post-editors contribute to the competences of translation students. As a result, it is suggested that post-editors may be used as educational material in translation education; indeed, the results call for further studies on the use of post-editors as educational materials.

Author 1: Burcu TÜRKMEN
Author 2: Muhammed Zahit CAN

Keywords: Post-editors; machine translation; translation education; computer-assisted translation

PDF

Paper 25: On Telemedicine Implementations in Ghana

Abstract: Most Sub-Saharan African countries, including Ghana, experience a shortage of medical professionals, especially in rural areas. This is mainly caused by the low intake of students into medical schools due to inadequate training facilities, and by the number of medical graduates who emigrate to foreign countries seeking new opportunities and better living standards. To reduce the effect of this, telemedicine is being implemented in certain areas to provide healthcare. Much as advances are being made in information and communication technologies, telemedicine in developing countries still needs to be upgraded and extended to cover more areas. Some categories of telemedicine have little to no implementation in Ghana due to a lack of resources, limited government support, and the absence of structured frameworks and policies to ensure their implementation. This paper presents telemedicine applications and implementations in Ghana to date, and suggests recommendations to mitigate some of the challenges impeding the advancement of telemedicine.

Author 1: E. T. Tchao
Author 2: Isaac Acquah
Author 3: S. D. Kotey
Author 4: C. S. Aggor
Author 5: J. J. Kponyo

Keywords: Telemedicine; Ghana; m-health; store-and-forward; information and communication technology

PDF

Paper 26: V-ITS: Video-based Intelligent Transportation System for Monitoring Vehicle Illegal Activities

Abstract: Vehicle monitoring is a challenging task for a video-based intelligent transportation system (V-ITS). Nowadays, V-ITS systems have a significant socioeconomic impact on the development of smart cities, and there is constant demand to monitor different traffic parameters. Traffic accidents have increased throughout the world by 1.7%, and the rise in accidents and deaths is due to people who do not abide by traffic rules. To address these challenges, an improved V-ITS system is developed in this paper to detect and track vehicles and drivers' activities during highway driving. The improved V-ITS system is capable of automatic traffic management that helps prevent accidents, providing real-time detection of immediate lane overruns, speed limit violations and yellow-line driving. To develop this V-ITS system, a pre-trained convolutional neural network (CNN) model with a 4-layer architecture was developed, and a deep-belief network (DBN) model was then utilized to recognize illegal activities. OpenCV and Python tools were mainly used to implement the V-ITS system, and the freely available online GRAM-RTM datasets were used to test its performance. The overall performance of this intelligent V-ITS system is comparable to other state-of-the-art systems. The real-time experimental results indicate that the V-ITS system can be used to reduce the number of accidents and ensure the safety of passengers as well as pedestrians.

Author 1: Qaisar Abbas

Keywords: Computer vision; intelligent traffic management system; traffic monitoring; vehicle tracking from video; image processing; deep learning

PDF

Paper 27: Analysis of Doppler Effects in Underwater Acoustic Channels using Parabolic Expansion Modeling

Abstract: Underwater communication systems play an important role in understanding the various phenomena that take place within our vast oceans. They can be used as an integral tool in countless applications, ranging from environmental monitoring to the gathering of oceanographic data, marine archaeology, and search and rescue missions. Acoustic communication is the viable solution for communication in the highly attenuating underwater environment. However, these systems pose a number of challenges for reliable data transmission, and the non-negligible Doppler effect emerges as a major factor. In order to support reliable high-data-rate communication, an understanding of the channel behavior is required. As sea trials are expensive, simulators are needed to study the channel behavior. Modeling this channel involves solving wave equations and validating against experimental data for that portion of the sea. The parabolic expansion model is a wave-theory-based acoustic channel model which applies Pade coefficients and Fourier coefficients as expansion functions to solve the wave equations. This work attempts to characterize the impact of the Doppler effect in the underwater acoustic channel using parabolic expansion models.
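
For a sense of scale, the standard narrowband approximation (an assumption here, not the paper's parabolic-expansion model) gives a Doppler shift of Delta_f = f0 * (v / c). With an underwater sound speed c of about 1500 m/s and a relative platform speed v = 5 m/s, Delta_f / f0 = 5 / 1500, roughly 0.33%, i.e. about 33 Hz on a 10 kHz carrier. A radio link at the same speed would see a relative shift near 1.7e-8, which is why Doppler compensation matters so much more in underwater acoustics.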

Author 1: Ranjani G
Author 2: Sadashivappa G

Keywords: Doppler effect; underwater communication; acoustic channel models; parabolic expansion

PDF

Paper 28: Automated Grading Systems for Programming Assignments: A Literature Review

Abstract: Automated grading of programming assignments is becoming more and more important nowadays, especially with the emergence of Massive Open Online Courses. Many techniques and systems are currently used for automated grading in educational institutions. This article provides a literature review of these automated grading systems and techniques, focusing on the differences between them and addressing their issues, advantages and disadvantages. The review shows that these systems have limitations due to difficulty of use by students, as noticed by some course instructors. Some of these problems stem from UI/UX difficulties, while others are due to beginner syntax errors and language barriers. Finally, the review shows the need to fill the gap by building new systems that are friendlier towards beginner programmers and have better localization and an easier user experience.

Author 1: Hussam Aldriye
Author 2: Asma Alkhalaf
Author 3: Muath Alkhalaf

Keywords: Automated grading

PDF

Paper 29: Supply Chain Modeling and Simulation using SIMAN ARENA a Case Study

Abstract: Controlling supply chains often requires identifying various constraints and optimizing the different links and parameters associated with the supply chain's functioning. To attain these goals, it is vital to know and understand the supply chain's diversity and complexity and to anticipate its behavior, which requires pertinent modeling that offers the information necessary to evaluate supply chain performance. The present paper focuses on the modeling and simulation of a supply chain case study using Rockwell's SIMAN ARENA software, mainly covering transport and the different operations in the chain. The purpose is to create simulation models and show how to use them in a case study to diagnose and master the operation and functioning of this supply chain. The models determine the performance of the supply chain by calculating the transportation time of each trip, the number of trips, the number of transported fertilizer and sulfur wagons and unloaded acid tanks, and finally the waiting times in the train station, in order to optimize these performance indicators.

Author 1: Azougagh Yassine
Author 2: Benhida Khalid
Author 3: Elfezazi Said

Keywords: Supply Chain; transport; simulation; modelling

PDF

Paper 30: An Agglomerative Hierarchical Clustering with Association Rules for Discovering Climate Change Patterns

Abstract: Ozone analysis is the process of identifying meaningful patterns that facilitate the prediction of future trends. One common technique used for ozone analysis is clustering, a popular method that contributes significant knowledge to time series data mining by aggregating similar data into groups. However, identifying significant patterns in ground-level ozone is quite a challenging task, especially after the clustering step. This paper presents pattern discovery for ground-level ozone using a proposed agglomerative hierarchical clustering with Dynamic Time Warping (DTW) as the distance measure, from which the patterns are extracted using the Apriori Association Rules (AAR) algorithm. The experiment is conducted on a Malaysian ozone dataset collected in Putrajaya in 2006. The results show 20 patterns that influence high ozone with high confidence (1.00), which can be grouped into four meaningful patterns: high temperature with low nitrogen oxide; high nitrogen oxide and nitrogen dioxide; high nitrogen oxide with carbon oxide; and high carbon oxide. These patterns support decision making when planning how much carbon oxide and nitrogen oxide must be reduced in order to avoid high surface ozone.
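
A compact sketch of the clustering half of the pipeline, pairing a textbook DTW distance with SciPy's agglomerative linkage; the series data and cluster count below are placeholders, and the association-rule step is omitted.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def dtw(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance."""
    D = np.full((len(a) + 1, len(b) + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            D[i, j] = abs(a[i - 1] - b[j - 1]) + min(
                D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[-1, -1]

# Toy hourly series: two near-identical pairs that should form two clusters.
series = [np.sin(np.linspace(0, 6, 50) + s) for s in (0.0, 0.1, 3.0, 3.1)]
n = len(series)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = dtw(series[i], series[j])

Z = linkage(squareform(dist), method="average")   # agglomerative clustering
print(fcluster(Z, t=2, criterion="maxclust"))     # e.g. [1 1 2 2]
```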

Author 1: Mahmoud Sammour
Author 2: Zulaiha Ali Othman
Author 3: Zurina Muda
Author 4: Roliana Ibrahim

Keywords: Hierarchical clustering; dynamic time warping; ground-level ozone; Apriori Association Rules

PDF

Paper 31: Regularization Activation Function for Extreme Learning Machine

Abstract: The Extreme Learning Machine (ELM) algorithm, based on single-hidden-layer feedforward neural networks, has been shown to be an excellent time series prediction technique with good generalization performance and extremely fast learning speed. However, ELM faces an overfitting problem that can affect model quality, because it is implemented with an empirical risk minimization scheme. Therefore, this study aims to improve ELM by introducing activation function regularization, yielding RAF-ELM. The experiment was conducted in two phases. First, the performance of the modified RAF-ELM was investigated using four types of activation function: Sigmoid, Sine, Tribas and Hardlim. The input weights and biases for the hidden layer were selected randomly, and the best number of hidden neurons was determined in the range 5 to 100, using UCI benchmark datasets. The Sigmoid activation function with 99 neurons showed the best performance. The proposed method improved accuracy by up to 0.016205 MAE with a processing time of 0.007 seconds compared with conventional ELM, and improved accuracy by up to 0.0354 MSE compared with the state-of-the-art algorithm. The second experiment validated RAF-ELM on 15 regression benchmark datasets against four neural network techniques: conventional ELM, Back Propagation, Radial Basis Function and Elman networks. The results show that RAF-ELM obtains the best accuracy among these techniques on time series data from various domains.
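
The core of a sigmoid ELM fits in a few lines of NumPy: random hidden weights, then a single least-squares solve for the output layer. The sketch below shows this conventional baseline; RAF-ELM's activation-function regularization itself is not detailed in the abstract.

```python
import numpy as np

def train_elm(X, y, n_hidden=99, seed=0):
    """Conventional ELM: random hidden layer, pseudo-inverse output weights."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights
    b = rng.standard_normal(n_hidden)                 # random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))            # sigmoid hidden activations
    beta = np.linalg.pinv(H) @ y                      # least-squares output weights
    return W, b, beta

def predict_elm(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Usage: W, b, beta = train_elm(X_train, y_train); y_hat = predict_elm(X_test, W, b, beta)
```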

Author 1: Noraini Ismail
Author 2: Zulaiha Ali Othman
Author 3: Noor Azah Samsudin

Keywords: Extreme learning machine; prediction; neural networks; regularization; time series

PDF

Paper 32: Energy-Aware Routing Hole Detection Algorithm in the Hierarchical Wireless Sensor Network

Abstract: Minimizing communication overhead through optimal path selection is a challenging issue in Wireless Sensor Network (WSN) routing protocols. Hierarchical routing optimizes energy utilization by distributing the workload among different clusters, but many-to-one multi-hop hierarchical routing results in excessive energy expenditure near the sink and leads to early energy exhaustion of those nodes, which can cause a routing hole around the base station. Data routed along the hole boundary nodes then exhausts their energy prematurely as well, enlarging the hole. Detecting holes saves the additional energy consumed around the hole and minimizes the hole size. In this paper a novel energy-efficient routing hole detection (EEHD) algorithm is presented; on detection of a routing hole, periodic re-clustering is performed to avoid long detour paths. Extensive simulations in MATLAB reveal that EEHD performs better than other conventional routing hole detection techniques, such as BCP and BDCIS.

Author 1: Najm Us Sama
Author 2: Kartinah Bt Zen
Author 3: Atiq Ur Rahman
Author 4: Aziz Ud Din

Keywords: Wireless sensor network; routing protocol; hierarchical routing; routing hole problem; routing hole detection

PDF

Paper 33: Microsatellite's Detection using the S-Transform Analysis based on the Synthetic and Experimental Coding

Abstract: A microsatellite in a genomic DNA sequence, or short tandem repeat (STR), is a class of tandem repeat with a repeated pattern of 2-6 base pairs adjacent to each other. The detection of specific tandem repeats is an important part of identifying genetic diseases, and is also used in DNA fingerprinting and in evolutionary studies. Many tools based on string matching have been developed to detect microsatellites. However, these tools rely on prior information about repetitions in the sequence, which is not always obtainable. For this reason, signal processing techniques have been suggested to overcome the limitations of the bioinformatics tools. In this paper, we use a new variant of the S-Transform, which we apply to short tandem repeat signals. These signals are first obtained by applying different coding techniques to the DNA sequences. To further study the performance of the proposed method, we compare it with different bioinformatics approaches (TRF, Mreps, Etandem) and three other signal processing methods: the Adaptive S-Transform (AST), the Empirical Mode and Wavelet Decomposition (EMWD), and Parametric Spectral Estimation (PSE) with an AR model. This study indicates that our approach outperforms the earlier methods in identifying short tandem repeats; in fact, it detects the exact number and positions of the trinucleotides present in the tested real DNA sequence.

Author 1: Soumaya Zribi
Author 2: Imen Messaoudi
Author 3: Afef Elloumi Oueslati
Author 4: Zied Lachiri

Keywords: DNA sequence; microsatellites; synthetic and experimental coding; s-transform; bioinformatic tools; Empirical Mode and Wavelet Decomposition (EMWD); Parametric Spectral Estimation (PSE)

PDF

Paper 34: Location Prediction in a Smart Environment

Abstract: Context prediction, and especially location prediction, is an important feature for improving the performance of smart systems. Predicting the next location or context of the user makes the system proactive, so it can offer suitable services to the user without the user's involvement. In this paper, a new approach is presented based on combining a pattern technique with a Bayesian network to predict the next location of the user. The approach was tested on a real data set, and our model achieved 89% next-location prediction accuracy.
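
The paper combines extracted movement patterns with a Bayesian network; as a much-simplified stand-in, a first-order transition model over visited places already captures the flavor of next-location prediction. The place names below are illustrative.

```python
from collections import defaultdict

def train(trajectories):
    """Count observed transitions between consecutive locations."""
    counts = defaultdict(lambda: defaultdict(int))
    for traj in trajectories:
        for prev, nxt in zip(traj, traj[1:]):
            counts[prev][nxt] += 1
    return counts

def predict(counts, current):
    """Return the most frequently observed successor of `current`."""
    successors = counts.get(current)
    return max(successors, key=successors.get) if successors else None

trajs = [["home", "work", "gym", "home"], ["home", "work", "home"]]
model = train(trajs)
print(predict(model, "home"))   # -> "work"
```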

Author 1: Wael Ali Alosaimi
Author 2: Ahmed Binmahfoudh
Author 3: Roobaea Alroobaea
Author 4: Atef Zaguia

Keywords: Location prediction; context; pattern; Bayesian network

PDF

Paper 35: Low-fidelity Prototype Design for Serious Game for Slow-reading Students

Abstract: Serious games are an alternative teaching aid increasingly used by teachers and parents. Their widespread use has fundamentally changed the way children live and learn, and has a positive impact on achievement and on children's motivation to learn. However, not all serious game designs are suitable for slow-reading students, who differ slightly from other students in cognitive potential and struggle to meet academic demands in class. Therefore, the main objective of this study is to produce low-fidelity prototypes that involve the target users from as early as the design process. The study focuses on producing storyboard content suitable for slow-reading students, to save time and reduce the cost of game model development. It uses a child-centered design (CCD) method that involves paper prototypes, a chauffeured prototype, think-aloud protocols and observations. The results are low-fidelity prototypes in the form of computerized storyboards that have been verified and will be used for heuristic assessment. These low-fidelity prototypes are expected to give an early look at the game and help the researchers develop high-fidelity prototypes.

Author 1: Saffa Raihan Zainal Abidin
Author 2: Siti Fadzilah Mat Noor
Author 3: Noraidah Sahari Ashaari

Keywords: Serious game; brain-based learning; low-fidelity prototype; paper prototype; chauffeured prototype; think aloud protocol; slow-reading students

PDF

Paper 36: Experimentation for Modular Robot Simulation by Python Coding to Establish Multiple Configurations

Abstract: Most Modular Self-reconfigurable (MSR) robots are developed to be capable of achieving different locomotion gaits. The approach involves a group of identical robotic modules that connect together and are able to perform specific tasks. In this study, a 3D-printed MSR robot named Dtto - Explorer Modular Robot was used to investigate the propagation achievable with three Dtto robot modules. As the Dtto robot is based on the Modular Transformer (M-TRAN), it has the same number of degrees of freedom (DOF) as M-TRAN, namely 2 DOF. Hence, this study was done to learn the variety of configurations that can be formed by multiple robot modules that have only 2 DOF. The robot propagation was simulated in the Virtual Robot Experimentation Platform (V-REP) software. The simulation results show that the Dtto MSR robot can propagate multiple configurations once the robot modules are connected to each other in different attachment orientations, which makes it suitable for further research on MSR robot architecture.
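
Scripted module control in V-REP typically goes through its legacy Python remote API; a minimal connection sketch is below. 'Dtto_joint1' is a hypothetical scene object name, and the simulation is assumed to be listening on the default remote API port 19997.

```python
import math
import vrep  # legacy V-REP remote API bindings (vrep.py shipped with V-REP)

vrep.simxFinish(-1)  # close any stale connections
client_id = vrep.simxStart("127.0.0.1", 19997, True, True, 5000, 5)
if client_id == -1:
    raise RuntimeError("V-REP remote API server not reachable")

# Look up one Dtto hinge joint and command a 45-degree bend.
_, joint = vrep.simxGetObjectHandle(client_id, "Dtto_joint1",
                                    vrep.simx_opmode_blocking)
vrep.simxStartSimulation(client_id, vrep.simx_opmode_blocking)
vrep.simxSetJointTargetPosition(client_id, joint, math.radians(45),
                                vrep.simx_opmode_oneshot)
vrep.simxFinish(client_id)
```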

Author 1: Muhammad Haziq Hasbulah
Author 2: Fairul Azni Jafar
Author 3: Mohd. Hisham Nordin
Author 4: Kazutaka Yokota

Keywords: Dtto robot; simulation; configuration; locomotion; orientation

PDF

Paper 37: A Novel Assessment to Achieve Maximum Efficiency in Optimizing Software Failures

Abstract: Software reliability is a specialized area of software engineering that deals with the identification of failures during software development. Effective reliability analysis helps quantify the number of failures that occur during the development phase, which in turn aids in correcting them. This paper presents a novel assessment to detect and eliminate actual software failures efficiently. The approach fits an exponential log-normal distribution within a Generalized Gamma Mixture Model (GGMM) and estimates two parameters using Maximum Likelihood Estimation (MLE). Standard evaluation metrics such as Mean Square Error (MSE), Coefficient of Determination (R2), Sum of Squared Errors (SSE), and Root Mean Square Error (RMSE) were calculated. The experimentation was carried out on five benchmark datasets, and the results show that the proposed technique identifies actual failures on par with existing models. This novel software reliability growth model identifies failures significantly more effectively and can help software organizations release bug-free software on time.

Author 1: Jagadeesh Medapati
Author 2: Prof Anand Chandulal J
Author 3: Prof Rajinikanth T V

Keywords: Software reliability; failure rate; reviews; software cost; optimization

PDF

Paper 38: Process Capability Indices under Non-Normality Conditions using Johnson Systems

Abstract: Process capability indices (PCIs) quantify the ability of a process to produce on-target and within-specification performance. Basic indices designed for normal processes give flawed results for non-normal processes. Numerous methods have been proposed to estimate PCIs for non-normal processes, some of which are based on transformation methods. The Johnson system comprises three distribution types that translate a continuous non-normal distribution to normality. The aim of this paper is to estimate four basic indices for non-normal processes using the Johnson system with a single straightforward procedure. The efficacy of the proposed approach can be assessed for all three Johnson curves (SB, SU, SL), but only the results for SU are presented in this paper. PCIs for a data set are estimated, and percentiles are obtained by our proposed exact method based on a selected Johnson density function; earlier approaches relied on approximate methods without any prior knowledge of the density function of the non-normal process. We compare our results with other existing methods for estimating PCIs for non-normal processes. The statistical analysis shows that this modification improves the process capability indices.

Author 1: Suboohi Safdar
Author 2: Dr. Ejaz Ahmed
Author 3: Dr. Tahseen Ahmed Jilani
Author 4: Dr. Arfa Maqsood

Keywords: Johnson curve; percentiles; simulation; exact method

PDF

Paper 39: Application of Artificial Neural Network and Information Gain in Building Case-Based Reasoning for Telemarketing Prediction

Abstract: Traditionally, case-based reasoning (CBR) has been used as an advanced technique for representing expert knowledge and reasoning. However, for stochastic business data such as customer behavior and user preferences, the knowledge cannot be extracted directly from the data to build the cases used in reasoning for prediction. An Artificial Neural Network (ANN), known for its ability to build predictive models from unprecedented business data, is used together with Shannon entropy and Information Gain (IG) to identify the key features. Eight of the seventeen attributes in the telemarketing data were identified as key features and used to build the CBR, with the weighting of the key features in the cases obtained from the IG values. The mechanism for creating the cases from the ANN input is discussed, and the integration process between the ANN and CBR is presented. This integration shows that the two techniques complement each other in building a model for predicting which customers would subscribe to a newly promoted banking service called a "term deposit".
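
To illustrate the entropy and IG weighting step described above, here is a minimal Python sketch of Shannon entropy and information gain on a hypothetical telemarketing-style attribute; the paper's actual 17-attribute dataset is not reproduced.

    import numpy as np
    import pandas as pd

    def entropy(labels):
        # Shannon entropy of a label vector
        probs = labels.value_counts(normalize=True)
        return -np.sum(probs * np.log2(probs))

    def information_gain(df, feature, target):
        # IG(target, feature) = H(target) - sum_v p(v) * H(target | feature = v)
        h_target = entropy(df[target])
        h_cond = sum(
            (len(g) / len(df)) * entropy(g[target])
            for _, g in df.groupby(feature)
        )
        return h_target - h_cond

    # Hypothetical telemarketing-style data: does the client subscribe?
    df = pd.DataFrame({
        "contact": ["cellular", "phone", "cellular", "cellular", "phone"],
        "subscribed": ["yes", "no", "yes", "no", "no"],
    })
    print(information_gain(df, "contact", "subscribed"))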

Author 1: S.M.F.D Syed Mustapha
Author 2: Abdulmajeed Alsufyani

Keywords: Artificial neural network; prediction model; telemarketing; shannon entropy; feature selection; case-based reasoning

PDF

Paper 40: A Survey on Opportunistic Routing

Abstract: Opportunistic Routing (OR) has attracted much attention in research on multi-hop wireless networks because, unlike traditional routing protocols such as Distance Vector (DV) and Link State (LS), it is a lightweight protocol with a strong source-routing basis. This paper presents the development of OR, starting from Selection Diversity Forwarding (SDF) and moving through Extremely Opportunistic Routing (ExOR), Proactive Source Routing (PSR), the Cooperative Opportunistic Routing scheme in Mobile Ad hoc Networks (CORMAN), and Zone-based Proactive Source Routing (ZPSR). Simulation tests using Network Simulator 2 (ns-2) show the effectiveness of OR protocols in terms of control overhead, Packet Delivery Ratio (PDR), throughput, and end-to-end delay.

Author 1: Saleh A. Khawatreh
Author 2: Mustafa Abdullah
Author 3: Enas N. Alzubi

Keywords: Opportunistic routing; PSR; ExOR; proactive routing; source routing; tree-based routing; lightweight routing; wireless networks; ad hoc networks

PDF

Paper 41: Optimal Design of a Variable Coefficient Fractional Order PID Controller by using Heuristic Optimization Algorithms

Abstract: This paper deals with the optimal design of a new type of Variable coefficient Fractional Order PID (V-FOPID) controller using heuristic optimization algorithms. Although many studies have paid attention to correcting the system's transient and steady-state responses together, few have treated the transient and steady-state performance separately, even though handling the two cases independently yields a better control response. Moreover, no previous study of fractional order control systems has used different controller parameters for the transient and steady-state responses of the system; the major contribution of this paper is to fill this gap with a novel approach. To justify the claimed efficiency of the proposed V-FOPID controller, variable coefficient controllers and classical ones are tested through a set of simulations on the control of an Automatic Voltage Regulator (AVR) system. The results show, first, that the proposed V-FOPID controller is superior to the classical PID, Variable coefficient PID (V-PID), and classical Fractional Order PID (FOPID) controllers; and second, that the Particle Swarm Optimization (PSO) algorithm has an advantage over the Artificial Immune System (AIS) algorithm for the controller design.
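
For readers unfamiliar with the optimizer, the following is a minimal Python sketch of a plain PSO loop searching over three controller gains; the quadratic objective is only a stand-in for an AVR step-response performance index, not the authors' model.

    import numpy as np

    rng = np.random.default_rng(1)

    def cost(gains):
        # Placeholder objective standing in for an AVR performance index
        # (e.g., ITAE); the true plant model is not reproduced here.
        kp, ki, kd = gains
        return (kp - 1.2) ** 2 + (ki - 0.4) ** 2 + (kd - 0.2) ** 2

    n, dim, w, c1, c2 = 20, 3, 0.7, 1.5, 1.5
    x = rng.uniform(0, 2, (n, dim))           # particle positions (gain vectors)
    v = np.zeros((n, dim))
    pbest, pbest_cost = x.copy(), np.apply_along_axis(cost, 1, x)
    gbest = pbest[pbest_cost.argmin()].copy()

    for _ in range(100):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        c = np.apply_along_axis(cost, 1, x)
        improved = c < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], c[improved]
        gbest = pbest[pbest_cost.argmin()].copy()

    print("best gains:", gbest)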

Author 1: Omer Aydogdu
Author 2: Mehmet Korkmaz

Keywords: Artificial immune system; automatic voltage regulator; particle swarm optimization; variable coefficient fractional order PID controller

PDF

Paper 42: Speaker Identification based on Hybrid Feature Extraction Techniques

Abstract: Speech processing is one of the most exciting areas of signal processing; speech contains many features and characteristics that can discriminate a person's identity, and the human voice is considered one of the important biometric characteristics that can be used for person identification. This work studies the effect on speaker identification of features extracted from various levels of the discrete wavelet transform (DWT), of the concatenation of two techniques (the discrete wavelet and curvelet transforms), and of reducing the number of features using principal component analysis (PCA). A backpropagation (BP) neural network is introduced as the classifier.
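
A minimal Python sketch of the DWT-plus-PCA feature pipeline described above, assuming the PyWavelets and scikit-learn packages and synthetic stand-in signals:

    import numpy as np
    import pywt
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    signals = rng.standard_normal((10, 1024))   # stand-ins for speech frames

    def dwt_features(sig, wavelet="db4", level=3):
        # Energy of each DWT sub-band as a compact feature vector
        coeffs = pywt.wavedec(sig, wavelet, level=level)
        return np.array([np.sum(c ** 2) for c in coeffs])

    X = np.vstack([dwt_features(s) for s in signals])
    X_reduced = PCA(n_components=2).fit_transform(X)  # feature reduction step
    print(X_reduced.shape)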

Author 1: Feras E. Abualadas
Author 2: Akram M. Zeki
Author 3: Muzhir Shaban Al-Ani
Author 4: Az-Eddine Messikh

Keywords: Speaker identification; biometrics; speaker verification; speaker recognition; text-independent; text-dependent

PDF

Paper 43: ATAM: Arabic Traffic Analysis Model for Twitter

Abstract: Harvesting Twitter for insight and meaning, in what is called sentiment analysis (SA), is a major trend stemming from computational linguistics and AI. Industry and academia are interested in maximizing efficiency while mining text to attain the most current available data and to crowdsource opinions. In this study, we present the ATAM model for traffic analysis using the data available on Twitter. The model comprises five components, starting with data streaming and collection and ending with road incident prediction through classification. The classification is performed using a lexicon-based method, with four predicted classes: safe, needs attention, dangerous, and neutral. The data were collected over three months in the city of Riyadh, Saudi Arabia. The model was applied to 10k tweets, with an overall accuracy of 82% in classifying all four classes.
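
To make the lexicon-based step concrete, here is a toy Python sketch that votes tweets into the four classes; the entries are hypothetical English stand-ins, not the paper's Arabic lexicon.

    # Hypothetical English stand-ins for the Arabic lexicon entries.
    LEXICON = {
        "accident": "dangerous", "crash": "dangerous",
        "slow": "needs attention", "jam": "needs attention",
        "clear": "safe", "smooth": "safe",
    }

    def classify(tweet):
        # Count lexicon hits per class; fall back to 'neutral'
        votes = {}
        for token in tweet.lower().split():
            label = LEXICON.get(token)
            if label:
                votes[label] = votes.get(label, 0) + 1
        return max(votes, key=votes.get) if votes else "neutral"

    print(classify("Traffic jam on the ring road, very slow"))  # needs attention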

Author 1: Amani AlFarasani
Author 2: Tahani AlHarthi
Author 3: Sarah AlHumoud

Keywords: Data mining; machine learning; sentiment analysis; unsupervised learning; lexicon-based; support vector machines

PDF

Paper 44: An Effective Approach to Analyze Algorithms with Linear O(n) Worst-Case Asymptotic Complexity

Abstract: Asymptotic analysis is a theoretical approach for approximating the time complexity of algorithms. The worst-case asymptotic complexity assigns an algorithm to a certain class by returning the degree of the algorithmic function while ignoring the lower-order terms. From a programming perspective, the asymptote only considers the number of iterations of a loop, ignoring the statements inside and outside it; however, every statement has some execution time. This paper provides an effective approach to analyzing algorithms that belong to the same asymptotic class. The theoretical analysis of algorithmic functions shows that the difference between the theoretical outputs of two algorithmic functions depends on the difference between their coefficients of 'n' and their constant terms. This difference marks the point at which the relative behavior of the algorithms changes. The approach is applied to algorithms with linear asymptotic complexity, considering two algorithms with different numbers of statements outside and inside the loop. The results positively indicate the effectiveness of the proposed approach, as the tables and graphs validate the results of the derived formula.
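
The crossover idea can be made concrete with a short worked example: for two linear cost functions f1(n) = a1*n + b1 and f2(n) = a2*n + b2, equal cost occurs at n = (b2 - b1) / (a1 - a2). The coefficients below are hypothetical.

    # Two hypothetical O(n) algorithms: f1 has more work per iteration,
    # f2 has more setup work outside the loop.
    a1, b1 = 5.0, 10.0    # f1(n) = 5n + 10
    a2, b2 = 3.0, 50.0    # f2(n) = 3n + 50

    # Equal cost when a1*n + b1 == a2*n + b2  =>  n = (b2 - b1) / (a1 - a2)
    crossover = (b2 - b1) / (a1 - a2)
    print(f"behavioral change at n = {crossover}")   # n = 20.0

    for n in (5, 20, 100):
        f1, f2 = a1 * n + b1, a2 * n + b2
        print(n, "f1 faster" if f1 < f2 else "f2 faster" if f2 < f1 else "equal")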

Author 1: Qazi Haseeb Yousaf
Author 2: Muhammad Arif Shah
Author 3: Rashid Naseem
Author 4: Karzan Wakil
Author 5: Ghufran Ullah

Keywords: Asymptotic complexity; interval analysis; in-depth analysis; Big-Oh; crossover point

PDF

Paper 45: Implementation of Multi-Agent based Digital Rights Management System for Distance Education (DRMSDE) using JADE

Abstract: The main objective of Distance Education (DE) is to spread quality education regardless of time and space, an objective that is readily achieved with the help of technology. With the development of the World Wide Web and high-speed internet, the quality of DE has improved because Digital Content (DC) can now be distributed easily and almost instantly to many learners in different locations in text, audio, and video formats. However, the main obstacle in digital publishing is the protection of the Intellectual Property Rights (IPR) of DC. Digital Rights Management (DRM), which manages rights over any digital creation, is the only solution to this problem. In this paper, we implement a Digital Rights Management System for Distance Education, known as DRMSDE. Having identified that Multi-Agent System (MAS) based technology is very popular for such implementations, we chose one of the most popular multi-agent tools, the JAVA Agent Development Framework (JADE), for our system. This paper presents an overview and the system architecture of the proposed implementation.

Author 1: Ajit Kumar Singh
Author 2: Akash Nag
Author 3: Sunil Karforma
Author 4: Sripati Mukhopadhyay

Keywords: Distance Education (DE); Intellectual Property Rights (IPR); Digital Rights Management (DRM); Multi-Agent System (MAS); JADE

PDF

Paper 46: A Review on Security Issues and their Impact on Hybrid Cloud Computing Environment

Abstract: The evolution of cloud infrastructures toward hybrid cloud models enables innovative business outcomes, driven by the twin pressures of greater IT agility and overall cost containment. Hybrid cloud solutions combine the capabilities of public clouds with those of on-premises private cloud environments. To realize the key benefits of the hybrid cloud model, however, several security issues must be addressed. In this paper, we explain in detail the security issues, such as maintaining the trust and authenticity of information, identity management, and compliance, that affect enterprises as they increasingly migrate their IT to hybrid clouds. The paper concludes with a comparative study of different existing solutions, targeting the common problem domains and security threats.

Author 1: Mohsin Raza
Author 2: Ayesha Imtiaz
Author 3: Umar Shoaib

Keywords: Hybrid cloud; migration; security issues; security techniques

PDF

Paper 47: Enhanced Random Early Detection using Responsive Congestion Indicators

Abstract: Random Early Detection (RED) is an Active Queue Management (AQM) method proposed in the early 1990s to reduce the effects of network congestion on the router buffer. Although various AQM methods have extended RED to enhance network performance, RED is still the most commonly utilized method because it provides stable performance under various network statuses. Indeed, RED maintains a manageable buffer queue length and avoids congestion resulting from an increase in traffic load; this is accomplished using an indicator that reflects the status of the buffer and a stochastic technique for packet dropping. Although RED predicts congestion, reduces packet loss, and avoids unnecessary packet dropping, it reacts slowly to an increase in buffer queue length, making it inadequate for detecting and reacting to sudden heavy congestion. Due to this limitation, RED is significantly influenced by the way in which the congestion indicator is calculated and used. In this paper, the RED technique is modified to overcome several disadvantages of the original method and enhance network performance under various network statuses. The results indicate that the proposed Enhanced Random Early Detection (EnRED) and Time-window Augmented RED (Windowed-RED) methods, compared to the original RED, ERED, and BLUE methods, enhance network performance in terms of loss, dropping, and packet delay.
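
For reference, a minimal Python sketch of the classic RED logic the paper builds on (an EWMA congestion indicator plus a linear drop probability); this is not the EnRED or Windowed-RED variant itself, and the weight is enlarged for a short demo.

    import random

    random.seed(1)
    MIN_TH, MAX_TH, MAX_P = 5.0, 15.0, 0.1
    WEIGHT = 0.2   # demo value; classic RED suggests a much smaller weight (~0.002)
    avg = 0.0      # EWMA of the instantaneous queue length (congestion indicator)

    def on_packet_arrival(queue_len):
        # Returns True if the arriving packet should be dropped.
        global avg
        avg = (1 - WEIGHT) * avg + WEIGHT * queue_len
        if avg < MIN_TH:
            return False                                  # no congestion
        if avg >= MAX_TH:
            return True                                   # heavy congestion: force drop
        p = MAX_P * (avg - MIN_TH) / (MAX_TH - MIN_TH)    # linear drop probability
        return random.random() < p

    for q in [2, 8, 12, 16, 18, 20]:
        print(q, on_packet_arrival(q))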

Author 1: Ahmad Adel Abu-Shareha

Keywords: Congestion; random early detection; active queue management

PDF

Paper 48: Recognition and Classification of Power Quality Disturbances by DWT-MRA and SVM Classifier

Abstract: The electrical power system is a large and complex network in which power quality disturbances (PQDs) must be monitored, analyzed, and mitigated continuously in order to preserve and re-establish normal power supply without even slight interruption. In practice, the huge volume of disturbance data is difficult to manage, and its analysis and monitoring require high accuracy and considerable time. Thus, automatic and intelligent algorithm-based methodologies are in practice for the detection, recognition, and classification of power quality events; this approach helps in taking preventive measures against abnormal operation, so that sudden fluctuations in supply can be handled accordingly. Disturbance types and causes, proper extraction of features in single and multiple disturbances, the type of classification model, and classifier performance remain the main concerns and challenges. In this paper, an attempt is made to present a different approach to the recognition of PQDs, using synthetic-model-generated disturbances that are frequent in power system operation, together with a proposed unique feature vector. Disturbances are generated in the Matlab workspace environment, and distinctive features of the events are extracted through the discrete wavelet transform (DWT) technique. A machine learning based support vector machine classifier is implemented for the classification and recognition of disturbances. The results show that the proposed methodology recognizes PQDs with high accuracy, sensitivity, and specificity, illustrating that the proposed approach is valid, efficient, and applicable.

Author 1: Fayyaz Jandan
Author 2: Suhail Khokhar
Author 3: Syed Abid Ali Shaha
Author 4: Farhan Abbasi

Keywords: Power quality disturbances; discrete wavelet transform; multi resolution analysis; support vector machine

PDF

Paper 49: Smart Parking Architecture based on Multi Agent System

Abstract: Finding a parking space in big cities is becoming increasingly difficult. In addition, the proliferation of cars has created several problems related to urban mobility. With the development of technology, these problems can be solved. In this paper, the parking problem is addressed by proposing an architecture that automates the parking process using the Internet of Things, artificial intelligence, and multi-agent systems.

Author 1: Sofia Belkhala
Author 2: Siham Benhadou
Author 3: Khalid Boukhdir
Author 4: Hicham Medromi

Keywords: Smart parking; IoT; multi agent system; artificial intelligence; parking availability; IoT application

PDF

Paper 50: Towards Implementing Framework to Generate Myopathic Signals

Abstract: In this paper, we describe a simulation system for myopathic surface electromyography (sEMG) signals. The architecture of the proposed system consists of two cascading modules. sEMG signals of three pathological skeletal muscles (Biceps Brachii, Interosseous Dorsalis, Tibialis Anterior) were generated. The Root Mean Square (RMS) envelope and Power Spectral Density (PSD) were used to validate our system.
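
A minimal Python sketch of the two validation quantities, the RMS envelope and the Welch PSD, applied to a synthetic stand-in signal (assuming NumPy and SciPy; the paper's actual sEMG generator is not reproduced):

    import numpy as np
    from scipy.signal import welch

    fs = 1000                                  # sampling rate (Hz), assumed
    t = np.arange(0, 2, 1 / fs)
    # Amplitude-modulated noise as a crude stand-in for a simulated sEMG burst
    emg = np.random.default_rng(0).standard_normal(t.size) * np.sin(2 * np.pi * t)

    def rms_envelope(x, win=100):
        # Moving-window RMS: square root of the moving average of x^2
        kernel = np.ones(win) / win
        return np.sqrt(np.convolve(x ** 2, kernel, mode="same"))

    env = rms_envelope(emg)
    freqs, psd = welch(emg, fs=fs, nperseg=256)   # Power Spectral Density
    print(env.max(), freqs[np.argmax(psd)])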

Author 1: Amira Dridi
Author 2: Jassem Mtimet
Author 3: Slim Yacoub

Keywords: Surface Electromyography (sEMG); myopathy; root mean square; Power Spectral Density (PSD); skeletal muscles; biceps brachii; interosseous dorsalis; tibialis anterior

PDF

Paper 51: Towards the Performance Investigation of Automatic Melanoma Diagnosis Applications

Abstract: Melanoma is a type of skin cancer, one of the fatal diseases, that appears as an abnormal growth of skin cells; the lesion often looks like a mole on the skin. Early detection of melanoma in skin lesions by means of screening is an important step towards a reduction in mortality. For this purpose, numerous automatic melanoma diagnosis models based on image processing and machine learning techniques are available as computer-based applications (CBA) and smartphone-based applications (SBA). Since smartphones, with their built-in cameras, are the most accessible and easiest method, SBA are preferred over CBA. In this paper, we explore the available literature and highlight the challenges SBA face in terms of execution time due to the limited computing power of smartphones. To resolve this limitation, we propose developing an SBA that seamlessly processes the image data on the cloud instead of on the smartphone's local hardware. We therefore designed a study that builds a machine learning model for melanoma diagnosis and measures the time taken for preprocessing, segmentation, feature extraction, and classification on the cloud, comparing the results with the processing time on the smartphone's local machine. The results showed a significant difference (p < 0.001) in average processing time between the two environments, with processing on the cloud being more efficient. The findings of the proposed research will help developers decide on the processing platform when developing smartphone applications for automatic melanoma diagnosis.

Author 1: Amna Asif
Author 2: Iram Fatima
Author 3: Adeel Anjum
Author 4: Saif U. R. Malik

Keywords: Smartphones; computer based systems; melanoma diagnosis; cloud computing

PDF

Paper 52: Multi-Objective Ant Colony Optimization for Automatic Social Media Comments Summarization

Abstract: Automatically summarizing social media comments can help users capture important information without reading all the comments. Automatic text summarization can be treated as a Multi-Objective Optimization (MOO) problem that must satisfy two conflicting objectives: retaining as much information from the source text as possible while keeping the summary as short as possible. To solve this problem, an undirected graph is created to capture the relations between social media comments. The Multi-Objective Ant Colony Optimization (MOACO) algorithm is then applied to generate summaries by selecting concise and important comments from the graph according to the desired summary size. The quality of the generated summaries is compared with other text summarization algorithms such as TextRank, LexRank, SumBasic, Latent Semantic Analysis, and KL-Sum. The results show that MOACO can produce informative and concise summaries with a small cosine distance to the source text and fewer words than the other algorithms.
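
The two competing objectives can be sketched in a few lines of Python: the cosine distance between TF-IDF vectors of the source comments and a candidate summary (information loss), with the word count as the length objective. The comments below are hypothetical.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    comments = [
        "The update broke login on older phones",
        "Login fails after the update on my device",
        "Great new interface, very clean design",
    ]
    summary = ["Login fails after the update; interface redesigned"]

    vec = TfidfVectorizer().fit(comments + summary)
    src = vec.transform([" ".join(comments)])
    summ = vec.transform(summary)

    # MOO view: minimize cosine distance (information loss) and summary length
    distance = 1.0 - cosine_similarity(src, summ)[0, 0]
    length = len(summary[0].split())
    print(f"cosine distance = {distance:.3f}, words = {length}")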

Author 1: Lucky
Author 2: Abba Suganda Girsang

Keywords: Automatic text summarization; social media; ant colony optimization; multi-objective

PDF

Paper 53: Classification of Melanoma Skin Cancer using Convolutional Neural Network

Abstract: Melanoma is a type of skin cancer and the most dangerous one, because it causes the majority of skin cancer deaths. Melanoma originates in melanocytes, the melanin-producing cells, so melanomas are generally brown or black in colour. Melanomas are mostly caused by exposure to ultraviolet radiation that damages the DNA of skin cells. Melanoma is often diagnosed manually by skilled doctors who visually analyze the results of a dermoscopy examination and match them against medical knowledge. The weakness of manual detection is that it is highly influenced by human subjectivity, which makes it inconsistent in certain conditions. Therefore, computer-assisted technology is needed to help classify the results of dermoscopy examinations and to produce results more accurately in a relatively faster time. The development of this application proceeded through problem analysis, design, implementation, and testing. The application uses deep learning technology with the Convolutional Neural Network method and the LeNet-5 architecture for classifying image data. Experiments using 44 images, with different numbers of training images and epochs, achieved the highest success rates of 93% in training and 100% in testing, obtained with 176 training images and 100 epochs. The application was created using the Python programming language and the Keras library with a TensorFlow back-end.
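
A minimal Keras sketch of a LeNet-5-style network for binary melanoma classification; the 32x32 grayscale input shape is an assumption here, and the paper's exact input size and preprocessing may differ.

    from tensorflow import keras
    from tensorflow.keras import layers

    # Classic LeNet-5-style stack adapted to a binary (melanoma / non-melanoma) output
    model = keras.Sequential([
        layers.Conv2D(6, 5, activation="tanh", input_shape=(32, 32, 1)),
        layers.AveragePooling2D(2),
        layers.Conv2D(16, 5, activation="tanh"),
        layers.AveragePooling2D(2),
        layers.Flatten(),
        layers.Dense(120, activation="tanh"),
        layers.Dense(84, activation="tanh"),
        layers.Dense(1, activation="sigmoid"),   # melanoma probability
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.summary()
    # model.fit(x_train, y_train, epochs=100) once dermoscopy images are prepared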

Author 1: Rina Refianti
Author 2: Achmad Benny Mutiara
Author 3: Rachmadinna Poetri Priyandini

Keywords: Convolutional neural network; deep learning; image classification; LeNet-5; melanoma skin cancer; python

PDF

Paper 54: Leveraging A Multi-Objective Approach to Data Replication in Cloud Computing Environment to Support Big Data Applications

Abstract: Increased data availability and high data-access performance are of utmost importance in large-scale distributed systems such as data clouds. To address these issues, data can be replicated at various locations in the system where applications are executed. Replication not only improves data availability and access latency but also improves system load balancing. While data replication in distributed cloud storage has been addressed in the literature, the majority of current techniques do not consider the different costs and benefits of replication from a comprehensive perspective. In this paper, we investigate the replica management problem, formulated using dynamic programming, in cloud computing environments to support big data applications. To this end, we propose a new, highly distributed replica placement algorithm that provides cost-effective replication of huge amounts of geographically distributed data into the cloud to meet the quality of service (QoS) requirements of data-intensive (big data) applications, while ensuring that the workload among the replica data centers is balanced. In addition, the algorithm takes into account the consistency among replicas due to update propagation. We thus build a multi-objective optimization approach for replica management in the cloud that seeks a near-optimal solution by balancing the trade-offs among the stated issues. To verify the effectiveness of the algorithm, we evaluated its performance and compared it with two baseline approaches from the literature. The evaluation results demonstrate the usefulness and superiority of the presented algorithm for the conditions of interest.

Author 1: Mohammad Shorfuzzaman
Author 2: Mehedi Masud

Keywords: Big data applications; data cloud; replication; dynamic programming; QoS requirement; workload constraint

PDF

Paper 55: A Categorical Model of Process Co-Simulation

Abstract: A set of dynamic systems in which some entities undergo transformations, or receive certain services in successive phases, can be modeled by processes. The specification of a process consists of a description of the properties of this process as a mathematical object in a suitable modeling language. The language chosen for specifying a process should facilitate the writing of this specification in a very clear and simple form. This raises the need for the use of various types of formalisms that are faithful to the component subsystems of such a system and which are capable of mimicking their varied dynamics. Often in practice, the development of domain specific languages is used to provide building blocks adapted to the processes. Thus, the concept of multi-paradigm modeling arises which involves the combination of different types of models, the decomposition and composition of heterogeneous specified models as well as their simulation. Multi-paradigm modeling presents a variety of challenges such as coupling and transforming the models described in various formalisms, the relationship between models at different levels of abstraction, and the creation of metamodels to facilitate the rapid development of varied formalisms for model specification. The simulation can be seen as a set of state variables that evolve over time. Co-simulation is a synthesis of all simulations of the components of the system, coordinated and synchronized based on interactions between them. The theory of categories provides a framework for organizing and structuring formal systems in which heterogeneous information can be transferred, thus allowing for the building of rigorous cohesion bridges between heterogeneous components. This paper proposes a new model of co-simulation of processes based on the category theory.

Author 1: Daniel-Cristian Craciunean
Author 2: Dimitris Karagiannis

Keywords: Process modeling; metamodel; modeling grammars; categorical grammars; category theory; categorical sketch; co-simulation; simulation

PDF

Paper 56: An Enhanced Concept based Approach for user Centered Health Information Retrieval to Address Readability Issues

Abstract: Searching for relevant medical guidance has become a common and notable task performed by internet users. This diversity of information seekers implies an enormous range of information needs and, consequently, a key prerequisite for the development of clinical retrieval systems that satisfy the clinical information needs of non-clinical professionals and their caregivers. This study designs an enhanced model for user-centered clinical information retrieval and proposes an improved system model that provides simpler medical meanings for clinical terms found in published clinical documents and online clinical search results. We evaluated and compared the enhanced model with current models in the clinical domain, namely the Query Likelihood Model (QLM), Latent Semantic Indexing (LSI), and the Concept-Based Approach (CBA), using the MeSH, MetaMap, and UMLS databases. The outcomes of the experimental study confirmed that the enhanced model (ECBA) achieved 0.9145, 0.9170, and 0.9156 on Mean Average Precision (MAP), Precision@10 (P@10), and Normalized Discounted Cumulative Gain@10 (NDCG@10), respectively. Hence, the best model for addressing readability issues is the Enhanced Concept-Based Approach.
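
For concreteness, a small Python sketch of the three reported metrics on toy binary relevance rankings; the queries and judgments below are hypothetical.

    import math

    def precision_at_k(rels, k=10):
        return sum(rels[:k]) / k

    def average_precision(rels):
        hits, score = 0, 0.0
        for i, r in enumerate(rels, start=1):
            if r:
                hits += 1
                score += hits / i           # precision at each relevant hit
        return score / max(hits, 1)

    def ndcg_at_k(rels, k=10):
        dcg = sum(r / math.log2(i + 1) for i, r in enumerate(rels[:k], start=1))
        ideal = sorted(rels, reverse=True)
        idcg = sum(r / math.log2(i + 1) for i, r in enumerate(ideal[:k], start=1))
        return dcg / idcg if idcg else 0.0

    # 1 = relevant document, 0 = non-relevant; two hypothetical queries
    runs = [[1, 1, 0, 1, 0, 0, 1, 0, 0, 1], [0, 1, 1, 0, 0, 1, 0, 0, 0, 0]]
    print("MAP =", sum(average_precision(r) for r in runs) / len(runs))
    print("P@10 =", sum(precision_at_k(r) for r in runs) / len(runs))
    print("NDCG@10 =", sum(ndcg_at_k(r) for r in runs) / len(runs))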

Author 1: Ibrahim Umar Kontagora
Author 2: Isredza Rahmi A. Hamid
Author 3: Nurul Aswa Omar

Keywords: Concept-based approach; medical discharge reports; clinical reports; query expansion; latent semantic indexing; query likelihood model

PDF

Paper 57: Microcontroller-based RFID, GSM and GPS for Motorcycle Security System

Abstract: Crime, including motorcycle theft, has been increasing, and it occurs regardless of time and place. Motorcycle owners need to secure their motorcycles by adding either a manual or an electronic lock; however, both are still incapable of protecting the motorcycle from theft. To address this problem, this research created an automatic motorcycle security system named Germs Narcissist. Germs Narcissist is a key innovation in automatic vehicle security that uses GPS (global positioning system), GSM (global system for mobile communication), and RFID (radio frequency identification). The system uses the short message service (SMS) to provide vehicle information, such as time, position, and alarms, to the owner of the motorcycle. This arrangement of technologies can serve as a practical and effective security key for motorcycles.

Author 1: Kunnu Purwanto
Author 2: Iswanto
Author 3: Tony Khristanto Hariadi
Author 4: Muhammad Yusvin Muhtar

Keywords: Microcontroller; GPS; GSM; RFID; motorcycle

PDF

Paper 58: Efficient Arnold and Singular Value Decomposition based Chaotic Image Encryption

Abstract: This paper proposes an efficient image encryption scheme based on the Arnold transform (AT) and Singular Value Decomposition (SVD). The proposed method applies AT to a plain image to permute the positions of all image pixels; a diffusion process is then applied to the resulting scrambled image by decomposing it into three SVD segments. The decryption process derives the plain image from the cipher image. Matlab simulation experiments were conducted to examine the suggested method, and the results show the superiority of the suggested approach with respect to encryption quality.
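
A minimal Python sketch of the two building blocks, an Arnold cat-map permutation of pixel positions followed by an SVD decomposition into three segments; how the paper chains them into the full cipher is only summarized above.

    import numpy as np

    def arnold(img, iterations=1):
        # Arnold cat map on an N x N image: (x, y) -> (x + y, x + 2y) mod N
        n = img.shape[0]
        out = img.copy()
        for _ in range(iterations):
            nxt = np.empty_like(out)
            for x in range(n):
                for y in range(n):
                    nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
            out = nxt
        return out

    img = np.arange(64, dtype=float).reshape(8, 8)     # toy 8x8 "image"
    scrambled = arnold(img, iterations=3)
    U, S, Vt = np.linalg.svd(scrambled)                # the three SVD segments
    print(np.allclose(U @ np.diag(S) @ Vt, scrambled)) # lossless decomposition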

Author 1: Ashraf Afifi

Keywords: Encryption arnold transform; singular value decomposition; chaotic image encryption

PDF

Paper 59: Finding Attractive Research Areas for Young Scientists

Abstract: The selection of a research area is vital for new researchers: one of the major issues a researcher faces is choosing the domain in which to carry out research, a decision that shapes the researcher's future in that area. Finding hot and attractive research areas has not been considered in the relevant Scientometrics literature, yet a correct choice of research domain helps researchers perform better and build a good academic career. The main aim of this study is to identify attractive research areas for researchers, especially those at the starting stage of their research life. To the best of our knowledge, work in this area is still very limited. To distinguish attractive research fields for new researchers, newly rising fields are identified by applying the well-known g-index, which is widely used for finding top authors in academic networks. In addition, we compute diverse relevant features of the research fields that help us identify top research areas. The results demonstrate that the proposed methodology is capable of recommending attractive research fields for potential future work. An extensive empirical analysis was carried out using the widely used DBLP academic database.
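
The g-index itself is easy to state in code: the largest g such that the top g papers together have at least g-squared citations. A minimal Python sketch with hypothetical citation counts:

    def g_index(citations):
        # Largest g such that the top-g papers have >= g^2 citations in total
        cites = sorted(citations, reverse=True)
        total, g = 0, 0
        for i, c in enumerate(cites, start=1):
            total += c
            if total >= i * i:
                g = i
        return g

    print(g_index([10, 8, 5, 4, 3]))   # top 5 sum to 30 >= 25, so g = 5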

Author 1: Nouman Malik
Author 2: Hikmat Ullah Khan
Author 3: Muhammad Ramzan
Author 4: Muhammad Shahzad Faisal
Author 5: Ahsan Mahmood

Keywords: Research field; scientometrics; attractive areas; g-index

PDF

Paper 60: Improvement in Classification Algorithms through Model Stacking with the Consideration of their Correlation

Abstract: In this research, we analyzed the performance of some well-known classification algorithms in terms of accuracy and proposed a methodology for model stacking on the basis of their correlation, which improves the accuracy of these algorithms. We selected Support Vector Machines (svm), Naive Bayes (nb), k-Nearest Neighbors (knn), Generalized Linear Model (glm), Latent Discriminant Analysis (lda), gbm, Recursive Partitioning and Regression Trees (rpart), rda, Neural Networks (nnet), and Conditional Inference Trees (ctree), and performed analyses on four textual datasets of different sizes: Scopus with 50,000 instances, IMDB Movie Reviews with 10,000 instances, Amazon Product Reviews with 1,000 instances, and a Yelp dataset with 1,000 instances. We used R-Studio for the experiments. The results show that the performance of all algorithms increased at the meta level. Neural Networks achieved the best results, with more than 25% improvement at the meta level, and outperformed the other evaluated methods with an accuracy of 95.66%; altogether, our model gives far better results than the individual algorithms' performance.

Author 1: Muhammad Azam
Author 2: Dr. Tanvir Ahmed
Author 3: Dr. M. Usman Hashmi
Author 4: Rehan Ahmad
Author 5: Abdul Manan
Author 6: Muhammad Adrees
Author 7: Fahad Sabah

Keywords: Classification algorithms; model stacking; correlation; k-nearest neighbor; pre-processing; meta classifiers

PDF

Paper 61: Image Retrieval using Visual Phrases

Abstract: Keypoint-based descriptors are widely used for various computer vision applications. In this process, keypoints are first detected in the given images and are then represented by robust and distinctive descriptors such as the scale-invariant feature transform (SIFT). Keypoint-based image-to-image matching has achieved significant accuracy for image retrieval applications such as image copy detection, similar image retrieval, and near-duplicate detection. Local keypoint descriptors are quantized into visual words to reduce the feature space, which makes image-to-image matching feasible for large-scale applications. Bag-of-visual-words quantization achieves this efficiency at the cost of accuracy. In this paper, the bag-of-visual-words model is extended to detect frequent pairs of visual words, known as frequent itemsets in text processing, also called visual phrases. Visual phrases increase the accuracy of image retrieval without increasing the vocabulary size. Experiments on benchmark datasets show the effectiveness of the proposed scheme.

Author 1: Benish Anwar
Author 2: Junaid Baber
Author 3: Atiq Ahmed
Author 4: Maheen Bakhtyar
Author 5: Sher Muhammad Daudpota
Author 6: Anwar Ali Sanjrani
Author 7: Ihsan Ullah

Keywords: Image processing; image retrieval; visual phrases; apriori algorithm; SIFT

PDF

Paper 62: An Improved Particle Swarm Optimization Algorithm with Chi-Square Mutation Strategy

Abstract: The Particle Swarm Optimization (PSO) algorithm is a population-based, strong stochastic search strategy inspired by the inherent way swarms of birds or animal herds seek their food. Owing to its flexibility in numerical experimentation, PSO has been used to solve diverse kinds of optimization problems; however, it is frequently caught in local optima when solving complex real-world problems. Considering this, a novel modified PSO is introduced based on a proposed chi-square mutation method, the main purpose of a mutation operator in PSO being quick convergence and escape from local minima. Population initialization also plays a critical role in meta-heuristic algorithms; therefore, to improve convergence, two quasi-random sequences, Halton and Sobol, are applied for initialization instead of a random distribution and properly combined with the chi-square mutated PSO (Chi-Square PSO) algorithm. The promising experimental results suggest the superiority of the proposed technique and give insight into how the proposed mutation operator influences the value of the cost function and divergence. The proposed mutation strategy is applied to eight benchmark functions extensively used in the literature. The simulation results verify that Chi-Square PSO provides efficient results compared with the other tested algorithms implemented for function optimization.
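
A minimal Python sketch of the two ingredients, Sobol quasi-random initialization (via scipy.stats.qmc) and a chi-square-distributed mutation of particle positions; the centering, scale, and mutation rate below are illustrative choices, not necessarily the paper's exact operator.

    import numpy as np
    from scipy.stats import qmc

    rng = np.random.default_rng(0)
    dim, n = 2, 8

    # Quasi-random initialization: Sobol points cover the search space
    # more evenly than uniform random draws.
    sobol = qmc.Sobol(d=dim, seed=0)
    positions = qmc.scale(sobol.random(n), l_bounds=[-5, -5], u_bounds=[5, 5])

    def chi_square_mutate(x, df=2, scale=0.1, rate=0.2):
        # Perturb a fraction of coordinates with chi-square distributed noise
        # (centered by subtracting the mean df so the perturbation is two-sided)
        mask = rng.random(x.shape) < rate
        noise = scale * (rng.chisquare(df, x.shape) - df)
        return np.where(mask, x + noise, x)

    print(chi_square_mutate(positions))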

Author 1: Waqas Haider Bangyal
Author 2: Hafiz Tayyab Rauf
Author 3: Hafsa Batool
Author 4: Saad Abdullah Bangyal
Author 5: Jamil Ahmed
Author 6: Sobia Pervaiz

Keywords: Particle swarm optimization; chi-square mutation; population initialization

PDF

Paper 63: Real Time Analysis of Crowd Behaviour for Automatic and Accurate Surveillance

Abstract: Surveillance is a necessity in the modern era, and creating alerts in the case of emergencies and disturbances is of great importance. As the number of simultaneous camera feeds increases, the burden on the human supervisor also increases. The proposed system is a way to aid the human supervisor in the surveillance job: creating alerts in real time helps in responding quickly to crucial situations. With this in mind, we propose the following: (1) generation of Violent Flow (ViF) descriptors as high-level features in real time; (2) use of the generated ViF descriptors of a video dataset to train a neural network and test its accuracy; and (3) development of a system that can detect signs of disturbance in a crowd in real time and learn from the decisions it makes.

Author 1: E Padmalatha
Author 2: Karedla Anantha Sashi Sekhar
Author 3: Dasarada Ram Reddy Mudiam

Keywords: Real time surveillance; violent flow descriptors; neural network

PDF

Paper 64: Impacts of Unbalanced Test Data on the Evaluation of Classification Methods

Abstract: The performance of a classifier in a supervised machine learning problem is popularly evaluated using accuracy, precision, recall, and F1-score. These parameters evaluate classifiers very well when the numbers of positive-label and negative-label samples in the testing set are balanced or nearly balanced. However, they may mis-evaluate classifiers in cases where the positive and negative samples in the testing set are unbalanced. This paper proposes updates to these parameters that take into account an unbalanced factor representing the ratio of positive to negative samples in the testing set. The updated parameters are then evaluated experimentally against the traditional parameters.
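
The failure mode is easy to demonstrate: on a heavily unbalanced test set, a trivial always-negative classifier scores near-perfect accuracy. The balanced variant below illustrates the correction idea; the paper's exact updated formulas may differ.

    def accuracy(tp, tn, fp, fn):
        return (tp + tn) / (tp + tn + fp + fn)

    def balanced_accuracy(tp, tn, fp, fn):
        # Averages the per-class rates so the positive/negative ratio cancels
        # out; illustrative of the correction idea, not the paper's formula.
        tpr = tp / (tp + fn)   # recall on the positive class
        tnr = tn / (tn + fp)   # recall on the negative class
        return (tpr + tnr) / 2

    # Test set with 990 negatives and 10 positives; the classifier
    # predicts "negative" for everything.
    tp, fn, tn, fp = 0, 10, 990, 0
    print(accuracy(tp, tn, fp, fn))           # 0.99 -- looks excellent
    print(balanced_accuracy(tp, tn, fp, fn))  # 0.50 -- reveals the problem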

Author 1: Manh Hung Nguyen

Keywords: Supervised machine learning evaluation; accuracy; f1 score; unbalanced factor

PDF

Paper 65: A Hybrid Exam Scheduling Technique based on Graph Coloring and Genetic Algorithms Targeted towards Student Comfort

Abstract: Scheduling is one of the vital activities needed in various aspects of life, and it is a key factor in generating exam schedules for academic institutions. In this paper, we propose an exam scheduling technique that combines graph coloring and genetic algorithms. On one hand, graph coloring is used to order sections so that sections that are difficult to schedule come first and are accordingly scheduled first, which helps increase the probability of generating valid schedules. On the other hand, we use genetic algorithms to search more effectively for more optimized schedules within the large search space. We propose a two-stage fitness function targeted toward increasing student comfort, and we investigate the effect and potency of the crossover and mutation operators. Our experiments are conducted on a realistic dataset, and the results show that a mutation-only hybrid approach has a low cost and converges faster toward more optimized schedules.
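
The ordering idea can be sketched briefly in Python: build a conflict graph between sections, order them by descending degree so that hard-to-schedule sections go first, and greedy-color them into time slots. The conflict graph here is hypothetical, and the genetic algorithm stage is omitted.

    # Hypothetical conflict graph: an edge means two sections share students
    conflicts = {
        "Math": {"Physics", "CS"},
        "Physics": {"Math", "Chemistry"},
        "CS": {"Math", "Chemistry"},
        "Chemistry": {"Physics", "CS"},
        "History": set(),
    }

    # Order sections by descending degree: hard-to-schedule sections first
    order = sorted(conflicts, key=lambda s: len(conflicts[s]), reverse=True)

    slot = {}
    for section in order:
        used = {slot[n] for n in conflicts[section] if n in slot}
        slot[section] = next(t for t in range(len(conflicts)) if t not in used)

    print(slot)  # {'Math': 0, 'Physics': 1, 'CS': 1, 'Chemistry': 0, 'History': 0}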

Author 1: Osama Al-Haj Hassan
Author 2: Osama Qtaish
Author 3: Maher Abuhamdeh
Author 4: Mohammad Al-Haj Hassan

Keywords: Exam scheduling; optimization; graph coloring; genetic algorithms; time tabling; fitness value

PDF

Paper 66: Developing a Framework for Analyzing Heterogeneous Data from Social Networks

Abstract: Due to the rapid growth of internet technologies, online social networks have become part of people's everyday life. People share their thoughts, feelings, likes and dislikes, and many other matters on social networks by posting messages, videos, and images and by commenting on them, making social networks a great source of heterogeneous data. Heterogeneous data is a kind of unstructured data that comes in a variety of forms at an uncertain speed. In this paper, we develop a framework to collect and analyze a significant amount of heterogeneous data obtained from a social network in order to understand people's behavioral patterns there. In our framework, we first crawl data from a well-known social network through its Graph API, including posts, comments, images, and videos. We compute keywords from users' comments and posts and separate the keywords into nouns, verbs, and adjectives with the help of an XML-based parts-of-speech tagger. We analyze the images related to each user to find out how the user likes to move about; for this purpose, we count the number of people in an image using a frontal-face detection classifier. We also analyze users' video files to find the categories of the videos; for this purpose, we divide each video into frames and measure the RGB properties, speed, duration, and frame height and width. Finally, for each user, we combine the information from text, images, and videos and, based on the combined information, develop the user's profile. We then generate recommendations for each user based on the user's activities and the cosine similarity between users. We performed several experiments to show the effectiveness of the developed system; from the experimental evaluation, we can say that our framework generates results up to a satisfactory level.

Author 1: Aritra Paul
Author 2: Mohammad Shamsul Arefin
Author 3: Rezaul Karim

Keywords: Heterogeneous data; recommendation systems; cosine similarity; video categorization

PDF

Paper 67: Energy Efficient Camera Solution for Video Surveillance

Abstract: As video surveillance grows rapidly, new problems and issues come into view that need serious and urgent attention. A video surveillance system requires a beneficial, energy-efficient camera solution. In this paper, a single-overhead-camera solution is introduced that overcomes the problems existing in various frontal and overhead surveillance systems, increasing the efficiency and accuracy of both. Two energy-efficient overhead camera models are presented. The first model consists of a single overhead camera with a wide-angle lens covering a wide field of view, addressing problems present in traditional surveillance systems. The second model presents a single smart, centralized overhead camera that controls various frontal cameras. Several factors associated with the camera models, such as field of view, focal length, and distortion, are also discussed. Finally, the impact of the surveillance cameras is discussed, showing that a single energy-efficient overhead camera surveillance system can solve many problems present in traditional surveillance systems, such as power consumption, storage, time, human resources, installation cost, and small coverage area.

Author 1: Misbah Ahmad
Author 2: Imran Ahmed
Author 3: Kaleem Ullah
Author 4: Iqbal Khan
Author 5: Ayesha Khattak
Author 6: Awais Adnan

Keywords: Energy efficient; video surveillance; overhead camera

PDF

Paper 68: A Gender-neutral Approach to Detect Early Alzheimer’s Disease Applying a Three-layer NN

Abstract: Early diagnosis of Alzheimer's, a neurodegenerative and irreversible disease, is crucial for effective disease management. Dementia from Alzheimer's is the agglomerated result of complex criteria rooted in medical, social, and educational backgrounds. Since multiple features predict the mental state of a subject, machine learning methodologies are ideal for classification due to their extremely powerful feature-learning capabilities. This study primarily attempts to classify subjects as having or not having the early symptoms of the disease and, on the sidelines, endeavors to detect whether a subject has already progressed towards Alzheimer's. The research utilizes the OASIS (Open Access Series of Imaging Studies) longitudinal dataset, which has a uniform distribution of demented and nondemented subjects, and establishes the use of novel features such as socio-economic status and educational background for early detection of dementia, supported by exploratory data analysis. The research exploits three data-engineered versions of the OASIS dataset: one eliminating the incomplete cases, another with synthetically imputed data and, lastly, one that eliminates gender as a feature, which eventually produced the best results and makes the model uniquely gender-neutral. The neural network applied has three layers: two ReLU hidden layers and a third softmax classification layer. The best accuracy of 86.49%, obtained on the cross-validation set with the trained parameters, is greater than that of traditional learning algorithms previously applied to the same data. Drilling down to two classes, namely demented and nondemented, a remarkable 100% accuracy was achieved, along with perfect recall and a precision of 0.8696 for the 'demented' class. The significance of this work lies in endorsing educational and socio-economic factors as useful features and eliminating gender bias using a simple neural network model, without the need for complete MRI tuples, which can be compensated for using specialized imputation methods.

Author 1: Shithi Maitra
Author 2: Tonmoy Hossain
Author 3: Abdullah Al-Sakin
Author 4: Sheikh Inzamamuzzaman
Author 5: Md. Mamun Or Rashid
Author 6: Syeda Shabnam Hasan

Keywords: Alzheimer’s disease; dementia; exploratory data analysis; synthetically imputed data; socio-economic factors; specialized imputation

PDF

Paper 69: Optimal Pragmatic Clustering for Wireless Networks

Abstract: Clustering nodes in wireless networks is one of the solutions used to improve network performance. This paper discusses clustering in wireless networks and presents a novel clustering algorithm named the Pragmatic Genetic Algorithm (PGA), which combines two well-known artificial intelligence techniques: K-means and the genetic algorithm. The proposed algorithm aims to minimize the execution time of clustering, especially in time-sensitive wireless network applications. The performance of PGA has been compared with the classical clustering algorithms K-means and KGA. The experiments were conducted using synthetic and real data from public repositories. PGA obtained excellent execution results and stable accuracy even when the number of nodes was increased.

Author 1: Suzan Basloom
Author 2: Nadine Akkari
Author 3: Ghadah Aldabbagh

Keywords: Clustering; genetic algorithm; K-means; wireless networks

PDF

Paper 70: Analysis of ECG Signal Processing and Filtering Algorithms

Abstract: Electrocardiography (ECG) is a common technique for recording the electrical activity of the human heart. Accurate computer analysis of the ECG signal is challenging, as the signal's low amplitude makes it exceedingly prone to high-frequency noise and various other artifacts. In remote health care systems, computer-based high-level understanding of ECG signals is performed using advanced machine learning algorithms whose accuracy relies on the Signal-to-Noise Ratio (SNR) of the input ECG signal. In this paper, we analyse various methods for removing the high-frequency noise components from the ECG signal and evaluate the performance of several adaptive filtering algorithms. The results suggest that the Normalized Least Mean Square (NLMS) algorithm achieves a high SNR, while the Sign LMS is computationally efficient.
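
For reference, a minimal Python sketch of the NLMS update, whose power-normalized step size is the usual reason for its strong SNR performance; the signals are synthetic stand-ins for an ECG corrupted by power-line interference.

    import numpy as np

    fs, n_taps, mu, eps = 500, 8, 0.5, 1e-6
    t = np.arange(0, 4, 1 / fs)

    clean = np.sin(2 * np.pi * 1.0 * t)                 # stand-in for an ECG trace
    noise = 0.5 * np.sin(2 * np.pi * 50 * t)            # power-line interference
    primary = clean + noise                             # corrupted recording
    reference = np.sin(2 * np.pi * 50 * t + 0.3)        # correlated noise reference

    w = np.zeros(n_taps)
    output = np.zeros_like(primary)
    for i in range(n_taps, len(primary)):
        x = reference[i - n_taps:i][::-1]               # reference tap vector
        e = primary[i] - w @ x                          # error = cleaned sample
        w += mu * e * x / (eps + x @ x)                 # NLMS: power-normalized step
        output[i] = e

    print("residual noise power:", np.mean((output[fs:] - clean[fs:]) ** 2))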

Author 1: Zia-ul-Haque
Author 2: Rizwan Qureshi
Author 3: Mehmood Nawaz
Author 4: Faheem Yar Khuhawar
Author 5: Nazish Tunio
Author 6: Muhammad Uzair

Keywords: Electrocardiogram; power line interference; electromyography; adaptive filter; Least Mean Square

PDF

Paper 71: Non-Linear EH Relaying in Delay-Transmission Mode over n-µ Fading Channels

Abstract: Energy harvesting (EH) is a technique for harvesting energy from radio frequency (RF) waves; RF signals have the ability to convey energy and information concurrently. EH in cooperative relaying systems can increase the capacity and coverage of wireless networks. In this work, we study a dual-hop (two-hop) relaying system with three nodes: a source, a relay, and a destination, where the source and destination have multiple antennas. We adopt a non-linear EH model and the TSR (time-switching-based relaying) protocol at the single-antenna relay node, and we evaluate the system performance over η−µ fading channels. With a saturation threshold, a non-linear EH receiver limits the harvested power. In the TSR protocol, the relay switches between EH and information processing, devoting a fraction of time to each process. The η−µ fading model incorporates several fading models as notable cases, namely Nakagami-m, one-sided Gaussian, Nakagami-q (Hoyt), and Rayleigh. The system performance is analyzed in terms of average capacity and throughput for different saturation threshold power levels, diverse antenna arrangements, and different values of the parameters η and µ.

Author 1: Ayaz Hussain
Author 2: Inayat Ali
Author 3: Ramesh Kumar
Author 4: Zahoor Ahmed

Keywords: EH relay; non-linear EH model; n-µ fading; TSR protocol; throughput

PDF

Paper 72: Exploring Mechanisms for Pattern Formation through Coupled Bulk-Surface PDEs in Case of Non-linear Reactions

Abstract: This work explores mechanisms for pattern formation through coupled bulk-surface partial differential equations of reaction-diffusion type. Reaction-diffusion systems posed both in the bulk and on the surface of stationary volumes are coupled through linear Robin-type boundary conditions; the work presented in this paper studies the case of non-linear reactions in the bulk and on the surface, respectively. The investigated system is non-dimensionalised, and a rigorous linear stability analysis is carried out to determine the necessary and sufficient conditions for pattern formation. Appropriate parameter spaces are generated, from which model parameters are selected. To exhibit pattern formation, a coupled bulk-surface finite element method is developed and implemented. The numerical algorithm is implemented using the open source software package deal.II, and computational results are shown on spherical and cuboid domains. The theoretical predictions of the linear stability analysis are verified and supported by the numerical simulations. The results show that non-linear reactions in the bulk and on the surface generate patterns everywhere.

Author 1: Muflih Alhazmi

Keywords: Bulk-surface; reaction-diffusion; Finite Element Method (FEM); Partial Differential Equations (PDEs)

PDF

Paper 73: Efficient Load Balancing in Cloud Computing using Multi-Layered Mamdani Fuzzy Inference Expert System

Abstract: In this article, a new Multi-Layered Mamdani Fuzzy Inference System (ML-MFIS) is proposed for the assessment of Efficient Load Balancing (ELB). The proposed ELB-ML-MFIS expert system can categorize the level of ELB in cloud computing as Excellent, Normal, or Low. The system was developed under guidelines from Microsoft and the standard of Pakistan's Punjab Information Technology Board (PITB). It uses input cloud computing parameters such as the data center, virtual machine, and Internet of Things (IoT) for the different layers. This article also analyses the intensities of the parameters and the results achieved using the proposed ELB-ML-MFIS expert system; all of these parameters and results were discussed with experts of Pakistan's Punjab Information Technology Board (PITB), Lahore. The proposed ELB-ML-MFIS expert system is more accurate than other approaches used for this purpose.

Author 1: Naila Samar Naz
Author 2: Sagheer Abbas
Author 3: Muhammad Adnan Khan
Author 4: Benish Abid
Author 5: Nadeem Tariq
Author 6: Muhammad Farrukh Khan

Keywords: PITB; IoT; virtual machine; data center; ML; ELB; MFIS

PDF

Paper 74: Performance Analysis of Multilayer Perceptron Neural Network Models in Week-Ahead Rainfall Forecasting

Abstract: The multilayer perceptron neural network (MLPNN) is considered one of the most efficient forecasting techniques that can be implemented for the prediction of weather occurrences. As with any machine learning implementation, the challenge in utilizing MLPNN for rainfall forecasting lies in developing and evaluating MLPNN models that deliver optimal forecasting performance. This research conducted a performance analysis of MLPNN models through data preparation, model design, and model evaluation in order to determine which parameters are the best-fit configurations for MLPNN model implementation in rainfall forecasting. During rainfall data preparation, the imputation process and spatial correlation evaluation of weather variables from various weather stations showed that the geographical locations of the chosen weather stations did not yield a direct correlation between stations with respect to rainfall behavior, leading to the decision to utilize the weather station with the most complete weather data to feed the MLPNN. By analyzing the performance of MLPNN models with different combinations of training algorithms, activation functions, learning rates, and momentum, it was found that the MLPNN model with 100 hidden neurons, the Scaled Conjugate Gradient training algorithm, and the Sigmoid activation function delivered the lowest RMSE of 0.031537, while another MLPNN model with the same number of hidden neurons and the same activation function but Resilient Propagation as the training algorithm had the lowest MAE of 0.0209. The results of this research show that performance analysis of MLPNN models is a crucial process in the implementation of MLPNN for week-ahead rainfall forecasting.

Author 1: Lemuel Clark P. Velasco
Author 2: Ruth P. Serquiña
Author 3: Mohammad Shahin A. Abdul Zamad
Author 4: Bryan F. Juanico
Author 5: Junneil C. Lomocso

Keywords: Multilayer perceptron neural network; performance analysis; rainfall forecasting

PDF

Paper 75: Android based Receptive Language Tracking Tool for Toddlers

Abstract: Android-based applications are gaining popularity among users today, especially among kids. Many Android applications related to children's speech therapy are available, but they have left some loopholes. 'Talking Kids' is the solution to these shortcomings: an Android-based receptive language tracking tool for toddlers that emphasizes improving a child's hearing capability and helps the child learn, understand, and develop receptive language vocabulary. It includes colourful images of daily routine items with their sounds in a native accent so that the child can learn these items. Child assessment is also included in the application for monitoring the child's performance, and on the basis of the assessment an activity log is maintained to keep track of progress. The collected results show the successful development of receptive language vocabulary in toddlers with the help of 'Talking Kids'.

Author 1: Sadia Firdous
Author 2: Madhia Wahid
Author 3: Amad Ud Din
Author 4: Khush Bakht
Author 5: Muhammad Yousuf Ali Khan
Author 6: Rehna Batool
Author 7: Misbah Noreen

Keywords: Receptive language; mobile application; hearing impairment

PDF

Paper 76: An Agent-based Simulation for Studying Air Pollution from Traffic in Urban Areas: The Case of Hanoi City

Abstract: In urban areas, traffic is one of the main causes of air pollution, and an effective solution to raise public awareness of this phenomenon could help significantly reduce pollution levels. In this study, we design and implement an agent-based simulation for studying the principles of production and dispersion of pollutants from road traffic in urban areas. The simulation takes into account different factors that produce pollutants in an urban zone (the case of Hanoi city in Vietnam): roads and streets, vehicles (types and quantity), traffic, wind direction, etc. With this simulation, one can observe and study the emission and dispersion of pollutants from traffic by conducting experiments with various scenarios and parameters. This work is an interesting way to raise public awareness of air pollution from traffic in urban areas, so that people can change their behavior to reduce it.

Author 1: KAFANDO Rodrique
Author 2: HO Tuong Vinh
Author 3: NGUYEN Manh Hung

Keywords: Air pollution; agent-based simulation; traffic

PDF

Paper 77: Spin-Then-Sleep: A Machine Learning Alternative to Queue-based Spin-then-Block Strategy

Abstract: One of the issues with spinlock protocols is excessive spinning, which wastes CPU cycles. Some protocols use a hybrid, spin-then-block approach to avoid this problem: the contending thread may relinquish the CPU instead of spinning and resume execution once notified. This paper presents a machine learning framework for intelligent sleeping and spinning as an alternative to the spin-then-block strategy. The framework can be used to address one of the challenges faced by that strategy: delay in the critical path. The work suggests a reinforcement learning approach for queue-based locks that aims at having threads learn whether to spin or sleep. The challenges of the suggested technique and future work are also discussed.
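
An illustrative sketch (not the paper's exact formulation) of how a thread could learn the spin-or-sleep decision with tabular Q-learning: the state abstraction, reward, and constants are all assumptions.

    import random

    ACTIONS = ("spin", "sleep")
    Q = {}                        # (state, action) -> learned value
    ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

    def choose(state):
        # Epsilon-greedy choice between spinning and sleeping.
        if random.random() < EPS:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

    def update(state, action, reward, next_state):
        # Standard one-step Q-learning update.
        best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
        old = Q.get((state, action), 0.0)
        Q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

    # Hypothetical use: state = number of queued waiters ahead of this thread;
    # reward = negative wasted CPU time plus wakeup latency, measured at acquisition.
    s, a = 3, choose(3)
    update(s, a, reward=-1.0, next_state=2)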

Author 1: Fadai Ganjaliyev

Keywords: Spinlock; spin-then-block; reinforcement learning; queue-based lock; intelligent sleeping

PDF

Paper 78: Triangle Hyper Hexa-cell Interconnection Network: A Novel Interconnection Network

Abstract: Interconnection networks play a central role in many applications because they directly influence performance. The challenge today is to find a suitable topology that meets requirements at minimum cost. One of the most famous interconnection structures is the cube; it is used to build many interconnection networks, such as the Cube Hyper Hexa-cell topology. This work proposes a new topology: a hybrid of the hyper hexa-cell topology and the triangle topology. In the proposed interconnection network, the diameter is smaller than that of the Cube Hyper Hexa-cell by one in any dimension, which affects parameters such as execution time. In a simulation environment, radix sort was applied on the suggested interconnection network in dimension two on both the Triangle Hyper Hexa-cell and the Cube Hyper Hexa-cell. The two topologies were compared both theoretically and practically, and the results show better performance for the Triangle Hyper Hexa-cell. Practically, the measured parameter was execution time in the simulation environment. Theoretically, equations were derived for the topological properties of both networks: the number of nodes in every dimension, the number of edges, the network degree, and the diameter. This architecture promises to be useful and powerful for new-generation parallel architectures, effective for many applications, and applicable in different fields.
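
The abstract does not reproduce the derived equations, so the sketch below is a generic checker rather than the paper's derivation: given any candidate topology as an adjacency list, it computes the degree and diameter that the study compares between the two networks. The 6-node ring used as input is a stand-in, not the actual hexa-cell wiring.

    from collections import deque

    def degree(adj):
        # Network degree = maximum number of links at any node.
        return max(len(neigh) for neigh in adj.values())

    def diameter(adj):
        # Diameter = longest shortest path; BFS eccentricity from every node.
        # Assumes the topology is connected.
        def ecc(src):
            dist = {src: 0}
            q = deque([src])
            while q:
                u = q.popleft()
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        q.append(v)
            return max(dist.values())
        return max(ecc(n) for n in adj)

    # Toy 6-node ring as a placeholder topology:
    hexa = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
    print(degree(hexa), diameter(hexa))  # 2 3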

Author 1: Asmaa Aljawawdeh
Author 2: Esraa Emriziq
Author 3: Saher Manaseer

Keywords: Interconnection network; hyper hexa-cell; parallel system; radix sort; triangle hyper hexa-cell; triangle topology

PDF

Paper 79: Growth Characteristics of Age and Gender-based Anthropometric Data from Human Assisted Remote Healthcare System

Abstract: Growth monitoring and promotion of optimal growth are essential components of primary health care. The most popular approach to this topic has been developed and utilized for decades by the CDC (Centers for Disease Control and Prevention) in the United States, resulting in its well-known clinical growth pattern charts for boys and girls. This metric comprises a series of percentile curves that illustrate the distribution of selected body measurements by age. The results show a clear uptrend of three traditional measures: height, weight, and BMI. The chart also shows a trend with the corresponding 5th, 10th, 25th, 50th, 75th, 90th, and 95th percentile data variations. Unfortunately, the CDC metric system only addresses ages 2-20 years, and apparently no other studies show correspondingly systematic growth characteristic patterns for humans older than 20 years. Our Portable Health Clinic system has for many years been archiving remote healthcare data records collected from different age groups and socioeconomic levels in many locations throughout Bangladesh. These data provide an important resource with which to study the age-related evolution of anthropometric measurements. We aim to see whether there are any significant clinical growth patterns, specifically regarding height, weight, BMI, waist, and hip, for humans over the age of 20 years. We could not identify a specific age at which a significant change from growth to decline occurs.
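
A short sketch of how CDC-style percentile curves can be computed from age-indexed records; the data below is synthetic and the 5-year age binning is an assumption, not the study's method.

    import numpy as np

    # Hypothetical (age, height) records for adults 20-80.
    rng = np.random.default_rng(2)
    ages = rng.integers(20, 81, 5000)
    height = 170 - 0.05 * (ages - 20) + rng.normal(0, 6, 5000)

    PCTS = [5, 10, 25, 50, 75, 90, 95]          # the chart's percentile lines
    curves = {p: [] for p in PCTS}
    for lo in range(20, 81, 5):                 # assumed 5-year age bins
        in_bin = height[(ages >= lo) & (ages < lo + 5)]
        if in_bin.size:
            for p in PCTS:
                curves[p].append(np.percentile(in_bin, p))

    print(curves[50])  # the median curve across age bins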

Author 1: Mehdi Hasan
Author 2: Rafiqul Islam
Author 3: Fumuhiko Yokota
Author 4: Mariko Nishikitani
Author 5: Akira Fukuda
Author 6: Ashir Ahmed

Keywords: Age and gender-based growth characteristics; portable health clinic; human assisted remote healthcare system; eHealth record

PDF

Paper 80: Density based Clustering Algorithm for Distributed Datasets using Mutual k-Nearest Neighbors

Abstract: Privacy and security have always been concerns that prevent the sharing of data and impede the success of many projects. Distributed knowledge computing, if done correctly, plays a key role in solving this problem: the goal is to obtain valid results while ensuring the non-disclosure of data. Density-based clustering is a powerful algorithm for analyzing the uncertain data that naturally occur in, and affect the performance of, many applications such as location-based services. Nowadays, many datasets available to researchers involve high-dimensional data points with varying densities, containing high-density regions surrounded by sparse ones. Existing clustering approaches handle these situations inefficiently, especially in the context of distributed data. In this paper, we design a new decomposable density-based clustering algorithm for distributed datasets (DDBC). DDBC utilizes the mutual k-nearest neighbor relationship to cluster distributed datasets with varying densities. The proposed algorithm preserves the privacy and security of the data on each site by requiring a minimal number of transmissions to other sites.
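
A sketch of the mutual k-nearest-neighbor relation at the heart of DDBC (the distributed, privacy-preserving protocol itself is omitted): points i and j are mutual k-NN if and only if each appears in the other's k-nearest-neighbor list.

    import numpy as np

    def mutual_knn_pairs(X, k):
        # Pairwise Euclidean distances; exclude self-distance.
        d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        knn = np.argsort(d, axis=1)[:, :k]       # k nearest neighbors per point
        sets = [set(row) for row in knn]
        # Keep only symmetric (mutual) neighbor relationships.
        return {(i, j) for i in range(len(X)) for j in sets[i]
                if i < j and i in sets[j]}

    X = np.random.default_rng(3).random((100, 2))
    pairs = mutual_knn_pairs(X, k=5)
    print(len(pairs))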

Author 1: Ahmed Salim

Keywords: Privacy; mutual k-nearest neighbor; density-based clustering; security; DDBC

PDF

Paper 81: Development of Talent Model based on Publication Performance using Apriori Technique

Abstract: The main challenge faced by Human Resource Management (HRM) is to recognize, develop, and manage talent efficiently and effectively, because HRM is responsible for selecting the right talent for suitable positions at the right time, in line with existing qualifications, talents, and achievements. Furthermore, decisions identifying talent for a position must be fair, truthful, and appropriate. In the academic field, publication is a core component in the evaluation of academic talent and is affected by research, supervision, and conference activity. Therefore, this study proposes an academic talent model based on the publication factor, built with the Apriori technique, for promotion purposes. The study applies the Apriori-based Association Rules algorithm to identify a set of meaningful rules for assessing the talents relevant to the promotion of academic staff in local universities. The findings yielded a talent model built from publication-related knowledge, which was evaluated by comparison against the promotion guidelines used for academic experts. This knowledge helps to improve the quality of the academic talent evaluation process and future planning in HRM.
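
A toy illustration of the Apriori frequent-itemset pass that underlies such rule mining; the records and attribute names below are invented, not the study's data.

    from itertools import combinations

    records = [
        {"journal_pubs_high", "supervision_high", "promoted"},
        {"journal_pubs_high", "conference_high", "promoted"},
        {"journal_pubs_high", "supervision_high", "promoted"},
        {"conference_high"},
    ]
    MIN_SUP = 0.5

    def support(itemset):
        # Fraction of records containing every item in the itemset.
        return sum(itemset <= r for r in records) / len(records)

    items = {i for r in records for i in r}
    frequent1 = [frozenset([i]) for i in items if support({i}) >= MIN_SUP]

    # Apriori step: only frequent itemsets are extended, exploiting the
    # anti-monotonicity of support.
    candidates = {a | b for a, b in combinations(frequent1, 2) if len(a | b) == 2}
    frequent2 = [s for s in candidates if support(s) >= MIN_SUP]
    print(frequent2)

    # A rule such as {journal_pubs_high} -> {promoted} then has confidence
    # support(antecedent | consequent) / support(antecedent).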

Author 1: Zulaiha Ali Othman
Author 2: Noraini Ismail
Author 3: Mohd Zakree Ahmad Nazri
Author 4: Hamidah Jantan

Keywords: Human resource management; Apriori-based association rules; promotion guidelines

PDF
