The Science and Information (SAI) Organization

IJACSA Volume 8 Issue 6

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.

View Full Issue

Paper 1: Intelligent Security for Phishing Online using Adaptive Neuro Fuzzy Systems

Abstract: Anti-phishing detection solutions employed in industry use blacklist-based approaches to achieve low false-positive rates, but blacklist approaches utilize website URLs only. This study analyses and combines phishing emails and phishing web-forms in a single framework, which allows feature extraction and feature model construction. The outcome should classify websites as phishing, suspicious or legitimate, and detect emerging phishing attacks accurately. The intelligent phishing security approach is based on machine learning techniques, using an Adaptive Neuro-Fuzzy Inference System and features extracted from a combination of sources. An experiment was performed using the two-fold cross-validation method to measure the system’s accuracy. The intelligent phishing security approach achieved high accuracy. The finding indicates that a feature model built from combined sources can detect phishing websites with higher accuracy. This paper contributes to the phishing field a feature model that combines sources in a single framework. The implication is that phishing attacks evolve rapidly; therefore, regular updates and staying ahead of phishing strategies are the way forward.

Author 1: G. Fehringer
Author 2: P. A. Barraclough

Keywords: Phishing websites; fuzzy models; feature model; intelligent detection; neuro fuzzy; fuzzy inference system

Download PDF

Paper 2: Multispectral Image Analysis using Decision Trees

Abstract: Many machine learning algorithms have been used to classify pixels in Landsat imagery. The maximum likelihood classifier is the widely accepted classifier. Non-parametric methods of classification include neural networks and decision trees. In this research work, we implemented decision trees using the C4.5 algorithm to classify pixels of a scene from the Juneau, Alaska area obtained with the Landsat 8 Operational Land Imager (OLI). One of the concerns with decision trees is that they are often overfitted to the training set data, which yields less accuracy in classifying unknown data. To study the effect of overfitting, we considered noisy training set data and built decision trees using randomly selected training samples with variable sample sizes. One way to overcome the overfitting problem is to prune the decision tree. We generated pruned trees with data sets of various sizes and compared the accuracy obtained with pruned trees to that obtained with full decision trees. Furthermore, we extracted classification rules from the pruned tree. To validate the rules, we built a fuzzy inference system (FIS) and reclassified the dataset. In designing the FIS, we used threshold values obtained from the extracted rules to define the input membership functions and used the extracted rules as the rule base. The classification results obtained from the decision trees and the FIS are evaluated using the overall accuracy obtained from the confusion matrix.
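The split-selection step at the heart of C4.5 can be illustrated with a short, stdlib-only sketch of entropy-based information gain (C4.5 proper refines this into the gain ratio; the toy labels and candidate split below are invented for illustration, not the paper's Landsat data):

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(labels, split_groups):
    """Gain from partitioning `labels` into the groups of `split_groups`."""
    n = len(labels)
    remainder = sum(len(g) / n * entropy(g) for g in split_groups)
    return entropy(labels) - remainder

# Toy example: 8 pixels labelled water/forest, partitioned by a band threshold.
labels = ["water"] * 4 + ["forest"] * 4
split  = [["water", "water", "water", "forest"],
          ["water", "forest", "forest", "forest"]]
print(round(information_gain(labels, split), 3))  # → 0.189
```

The tree builder evaluates this gain for every candidate threshold on every band and greedily picks the best split; pruning then removes subtrees whose gain does not generalize.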

Author 1: Arun Kulkarni
Author 2: Anmol Shrestha

Keywords: Decision trees; knowledge extraction; fuzzy inference system; Landsat imagery

Download PDF

Paper 3: Sentiment Analysis on Twitter Data using KNN and SVM

Abstract: Millions of users share opinions on various topics using micro-blogging every day. Twitter is a very popular micro-blogging site where users are limited to 140 characters per message; this restriction makes users concise as well as expressive at the same time. For that reason, Twitter has become a rich source for sentiment analysis and belief mining. The aim of this paper is to develop a functional classifier that can correctly and automatically classify the sentiment of an unknown tweet. In our work, we propose techniques to classify the sentiment label accurately. We introduce two methods: one is a sentiment classification algorithm (SCA) based on k-nearest neighbors (KNN), and the other is based on the support vector machine (SVM). We also evaluate their performance on real tweets.
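The KNN side of the comparison can be sketched in a few lines of stdlib Python; the two-dimensional "tweet features" (positive-word count, negative-word count) below are hypothetical stand-ins for whatever feature vectors the classifier actually uses:

```python
from collections import Counter
from math import sqrt

def euclidean(a, b):
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_predict(train, query, k=3):
    """Majority vote among the k nearest labelled vectors."""
    nearest = sorted(train, key=lambda item: euclidean(item[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical training tweets as (feature vector, sentiment label) pairs.
train = [((3, 0), "pos"), ((2, 1), "pos"), ((0, 3), "neg"),
         ((1, 2), "neg"), ((4, 1), "pos"), ((0, 2), "neg")]
print(knn_predict(train, (3, 1)))  # → pos
```

The SVM counterpart instead learns a separating hyperplane once at training time, which is the trade-off the paper's evaluation explores.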

Author 1: Mohammad Rezwanul Huq
Author 2: Ahmad Ali
Author 3: Anika Rahman

Keywords: Support Vector Machine (SVM); k-nearest neighbor (KNN); Grid Search; Confusion matrix; ROC graph; Hyperplane; Social data analysis

Download PDF

Paper 4: Handwritten Digit Recognition based on Output-Independent Multi-Layer Perceptrons

Abstract: With handwritten digit recognition being an established and significant problem in computer vision and pattern recognition, a great deal of research has been undertaken in this area. It is not a trivial task because of the large variation in writing styles found in the available data. Therefore, both the features and the classifier need to be efficient. The core contribution of this research is the development of a new MLP-based classification technique for recognising the binary digits ‘0’ and ‘1’ in handwritten documents. This technique maps the different sets of input data onto the MLP output neurons. An experimental evaluation of the technique’s performance is provided, based on the well-known ‘Pen-Based Recognition of Handwritten Digits’ dataset, which comprises a total of 250 handwriting samples taken from 44 writers. The results obtained are very promising for such an approach to accurate handwriting recognition.

Author 1: Ismail M. Keshta

Keywords: Handwritten digit recognition; Pattern classification; Neural network model; Two-class classification; Accuracy; Binary data

Download PDF

Paper 5: Process Improvements for Crowdsourced Software Testing

Abstract: Crowdsourced software testing has lately become common practice. It refers to the use of crowdsourcing in software testing activities. Although crowd testing is a collaborative process by nature, no available research provides a critical assessment of the key collaboration activities offered by current crowdsourced testing platforms. In this paper, we review the process used in crowd testing platforms, identifying the workflow used to manage the crowd testing process, from submitting testing requirements to reviewing the testing report. This understanding of the current process is then used to identify a set of limitations, which has led us to propose three process improvements: improving the assignment of the crowd manager, improving the building of the test team, and monitoring testing progress. We designed and implemented these process improvements and then evaluated them using two techniques: 1) a questionnaire and 2) a workshop. The questionnaire shows that the process improvements are sound and strong enough to be added to crowd testing platforms. In addition, the workshop was useful for assessing the design and implementation of the process improvements; the participants were satisfied with them but asked for further modifications. Moreover, because crowd testing requires participation from a large number of people, the suggested automation of the current process management was highly appreciated.

Author 1: Sulta Alyahya
Author 2: Dalal Alrugebh

Keywords: Software testing; crowdsourcing; crowd testing; process improvement; tool

Download PDF

Paper 6: Glaucoma-Deep: Detection of Glaucoma Eye Disease on Retinal Fundus Images using Deep Learning

Abstract: Detection of glaucoma eye disease is still a challenging task for computer-aided diagnostics (CADx) systems. During the eye screening process, ophthalmologists assess glaucoma through structural changes in the optic disc (OD), loss of nerve fibres (LNF) and atrophy of the peripapillary region (APR). Automated CADx systems have been developed to assess this eye disease in retinal images through segmentation-based hand-crafted features. In this paper, by contrast, an unsupervised convolutional neural network (CNN) architecture was used to extract multilayer features from raw pixel intensities. Afterwards, a deep-belief network (DBN) model was used to select the most discriminative deep features based on the annotated training dataset. Finally, the decision is made by a softmax linear classifier to differentiate between glaucoma and non-glaucoma retinal fundus images. The proposed system, known as Glaucoma-Deep, was tested on 1200 retinal images obtained from publicly and privately available datasets. To evaluate the performance of the Glaucoma-Deep system, the sensitivity (SE), specificity (SP), accuracy (ACC) and precision (PRC) statistical measures were used. On average, SE of 84.50%, SP of 98.01%, ACC of 99% and PRC of 84% were achieved. Compared to state-of-the-art systems, the Glaucoma-Deep system accomplished significantly higher results. Consequently, the Glaucoma-Deep system can readily recognize glaucoma eye disease, supporting clinical experts during large-scale eye screening.
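The four statistical measures used to evaluate the system follow directly from the confusion-matrix counts; the counts below are illustrative only, not the paper's data:

```python
def screening_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, accuracy and precision from raw counts."""
    return {
        "SE":  tp / (tp + fn),                   # sensitivity: recall on diseased cases
        "SP":  tn / (tn + fp),                   # specificity: recall on healthy cases
        "ACC": (tp + tn) / (tp + fp + tn + fn),  # overall accuracy
        "PRC": tp / (tp + fp),                   # precision of positive calls
    }

# Illustrative counts for a 1200-image screening run (invented numbers).
m = screening_metrics(tp=169, fp=32, tn=968, fn=31)
print({k: round(v, 3) for k, v in m.items()})
```

Reporting SE and SP separately matters here because screening data is heavily imbalanced: a classifier that labels everything non-glaucoma can still post a high ACC.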

Author 1: Qaisar Abbas

Keywords: Fundus imaging; glaucoma; diabetic retinopathy; deep learning; convolutional neural networks; deep belief network

Download PDF

Paper 7: GPC Temperature Control of A Simulation Model Infant-Incubator and Practice with Arduino Board

Abstract: The thermal environment surrounding preterm neonates in closed incubators is regulated via an air temperature control mode. At present, these control modes do not take into account all the thermal parameters involved in an incubator model, such as the thermal parameters of preterm neonates (birth weight < 1000 grams). The objective of this work is to design and validate a generalized predictive controller (GPC) that takes into account both the closed incubator model and the premature newborn model. We then implemented this control law on a DRAGER neonatal incubator, with and without a newborn, using a microcontroller board. Methods: The design of the predictive control law is based on a prediction model. The developed model allows us to take into account all the thermal exchanges (radiative, conductive, convective and evaporative) and the various interactions between the incubator environment and the premature newborn. Results: The predictive control law and the simulation model developed in the Matlab/Simulink environment make it possible to evaluate the quality of the air temperature control mode to which the newborn is exposed. The simulation and implementation results for the air temperature inside the incubator (with and without a newborn) prove the feasibility and effectiveness of the proposed GPC controller compared with a proportional-integral-derivative (PID) controller.
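The PID baseline against which the GPC is compared can be sketched as a discrete controller driving a toy first-order thermal model (all plant constants and gains below are invented for illustration; the paper's incubator model is far richer):

```python
def make_pid(kp, ki, kd, dt):
    """Return a stateful discrete PID step function: error -> control signal."""
    state = {"integral": 0.0, "prev_error": None}
    def step(error):
        state["integral"] += error * dt
        derivative = 0.0 if state["prev_error"] is None else (error - state["prev_error"]) / dt
        state["prev_error"] = error
        return kp * error + ki * state["integral"] + kd * derivative
    return step

# Toy first-order plant: air temperature relaxes toward a 25 C ambient
# unless heated; the PID drives it toward the 36 C setpoint.
setpoint, temp, tau, dt = 36.0, 25.0, 120.0, 1.0
pid = make_pid(kp=2.0, ki=0.05, kd=0.0, dt=dt)
for _ in range(600):
    heating = pid(setpoint - temp)
    temp += dt * (-(temp - 25.0) + heating) / tau
print(round(temp, 1))  # temperature is driven toward the setpoint
```

A GPC differs from this by optimizing the control signal over a prediction horizon of the plant model rather than reacting only to the instantaneous error.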

Author 1: E. Feki
Author 2: M. A. Zermani
Author 3: A. Mami

Keywords: Incubator; neonatal; model; temperature; Arduino; GPC

Download PDF

Paper 8: Intelligent Hybrid Approach for Android Malware Detection based on Permissions and API Calls

Abstract: Android malware is rapidly becoming a potential threat to users. The number of Android malware applications is growing exponentially; they are becoming significantly more sophisticated and cause potential financial and information losses for users. Hence, there is a need for effective and efficient techniques to detect Android malware applications. This paper proposes an intelligent hybrid approach for Android malware detection using the permissions and API calls in Android applications. The proposed approach consists of two steps. The first step involves finding the most significant permissions and Application Programming Interface (API) calls that lead to efficient discrimination between malware and goodware applications. For this purpose, two feature selection algorithms, Information Gain (IG) and the Pearson correlation coefficient (PC), are employed to rank the individual permissions and API calls by their importance for classification. In the second step, a new hybrid approach based on the combination of the Adaptive Neuro-Fuzzy Inference System (ANFIS) with Particle Swarm Optimization (PSO) is employed to differentiate between malware and goodware Android applications (apps). PSO is intelligently utilized to optimize the ANFIS parameters by tuning its membership functions to generate reliable and more precise fuzzy rules for Android app classification. Using a dataset consisting of 250 goodware and 250 malware apps collected from different sources, the conducted experiments show that the suggested method for Android malware detection is effective and achieved an accuracy of 89%.
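The Pearson-correlation ranking step can be sketched with stdlib Python; the permission flags and malware labels below form a hypothetical toy dataset, not the paper's 500-app corpus:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient; 0.0 for constant (zero-variance) input."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return 0.0 if sx * sy == 0 else cov / (sx * sy)

# Columns: binary permission flags per app (hypothetical); label 1 = malware.
features = {
    "SEND_SMS":      [1, 1, 1, 0, 0, 0],
    "INTERNET":      [1, 1, 1, 1, 1, 1],  # requested by everything: no signal
    "READ_CONTACTS": [1, 0, 1, 0, 1, 0],
}
labels = [1, 1, 1, 0, 0, 0]
ranking = sorted(features, key=lambda f: abs(pearson(features[f], labels)),
                 reverse=True)
print(ranking[0])  # → SEND_SMS
```

Only the top-ranked features are then fed to the ANFIS classifier, which keeps the fuzzy rule base small.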

Author 1: Altyeb Altaher
Author 2: Omar Mohammed Barukab

Keywords: Android malware detection; features selection; fuzzy inference system; particle swarm optimization

Download PDF

Paper 9: Insight to Research Progress on Secure Routing in Wireless Ad hoc Network

Abstract: Wireless Ad hoc Networks offer cost-effective communication to users, free from any infrastructural dependencies. They are characterized by decentralized architecture, mobile nodes, dynamic topology, etc., which make network formation particularly challenging. In the past decade, a series of research efforts has aimed at enhancing routing performance by addressing various significant problems. This manuscript mainly focuses on the progress made on secure routing protocols, which remains a major issue. The paper discusses the different approaches taken in the existing literature towards discrete security problems and explores the level of security they effectively achieve. The study finds that progress towards securing wireless ad hoc networks is still limited and that there is a need for a robust security framework. The paper also discusses the research gaps identified in existing techniques and finally outlines future work directions to address certain unsolved problems.

Author 1: Jyoti Neeli
Author 2: N K Cauvery

Keywords: Attacks; confidentiality; secured routing; integrity; mobile ad hoc network; wireless ad hoc network

Download PDF

Paper 10: Environments and System Types of Virtual Reality Technology in STEM: a Survey

Abstract: Virtual Reality (VR) technology is widely used today in Science, Technology, Engineering and Mathematics (STEM) fields. VR is an emerging computer interface distinguished by high degrees of immersion, believability and interaction. The goal of VR is to make users believe, as much as possible, that they are within the computer-generated environment. VR has become one of the important technologies to discuss in terms of its applications, usage, and the different system types that can achieve huge benefits in the real world. This survey paper introduces detailed information about VR systems and the requirements for building a correct VR environment. Moreover, this work presents a comparison between VR system types. Then, it presents the tools and software used for building VR environments. After that, we outline a roadmap for selecting the appropriate VR system according to the field of application. Finally, we present conclusions and predictions on the future development of VR systems.

Author 1: Asmaa Saeed Alqahtani
Author 2: Lamya Foaud Daghestani
Author 3: Lamiaa Fattouh Ibrahim

Keywords: Virtual reality; 3D graphics; immersion; 3D images; navigation; multimedia

Download PDF

Paper 11: Phishing Websites Classification using Hybrid SVM and KNN Approach

Abstract: Phishing is a potential web threat that involves mimicking official websites to trick users into revealing important information such as usernames and passwords related to financial systems. The attackers use social engineering techniques like email, SMS and malware to defraud users. Due to the potential financial losses caused by phishing, it is essential to find effective approaches for phishing website detection. This paper proposes a hybrid approach for classifying websites as Phishing, Legitimate or Suspicious. The proposed approach intelligently combines the K-nearest neighbors (KNN) algorithm with the Support Vector Machine (SVM) algorithm in two stages: firstly, KNN is utilized as an effective classifier that is robust to noisy data; secondly, SVM is employed as a powerful classifier. The proposed approach integrates the simplicity of KNN with the effectiveness of SVM. The experimental results show that the proposed hybrid approach achieved the highest accuracy of 90.04% when compared with other approaches.

Author 1: Altyeb Altaher

Keywords: information security; phishing websites; Support vector machine; K-nearest neighbors

Download PDF

Paper 12: On Arabic Character Recognition Employing Hybrid Neural Network

Abstract: Arabic characters present intricate, multidimensional and cursive visual information. Developing a machine learning system for Arabic character recognition is an exciting research area. This paper addresses a neural computing concept for Arabic Optical Character Recognition (OCR). The method is based on local image sampling of each character into a selected feature matrix and feeding these matrices into a Bidirectional Associative Memory followed by a Multilayer Perceptron (BAMMLP) with the backpropagation learning algorithm. The efficacy of the system has been demonstrated on different test patterns of Arabic characters. Experimental results validate that the system recognizes Arabic characters with an overall accuracy of more than 82%.

Author 1: Al-Amin Bhuiyan
Author 2: Fawaz Waselallah Alsaade

Keywords: Arabic characters; Arabic OCR; image histogram; BAMMLP; hybrid neural network

Download PDF

Paper 13: Cross-Layer-Based Adaptive Traffic Control Protocol for Bluetooth Wireless Networks

Abstract: Bluetooth technology is designed particularly for wireless personal area networks that are low-cost and energy-efficient. Efficient transmission between different Bluetooth nodes depends on network formation. An inefficient Bluetooth topology may create a bottleneck and a delay in the network when data is routed. To overcome the congestion problem of Bluetooth networks, a Cross-layer-based Adaptive Traffic Control (CATC) protocol is proposed in this paper. The proposed protocol works through backup device utilization and network restructuring. The proposed CATC is divided into two parts: the first part is based on intra-piconet traffic control, while the second part is based on inter-piconet traffic control. The CATC protocol controls the traffic load on the master node by network restructuring, and the traffic load on the bridge node by activating a Fall-Back Bridge (FBB). During piconet restructuring, the CATC performs Piconet Formation within Piconet (PFP) and Scatternet Formation within Piconet (SFP). The PFP reconstructs a new piconet within the same piconet for devices that are directly within radio range of each other. The SFP reconstructs a scatternet within the same piconet if the nodes are not within radio range. Simulation results show that the proposed CATC improves overall performance and reduces control overhead in a Bluetooth network.

Author 1: Sabeen Tahir
Author 2: Sheikh Tahir Bakhsh

Keywords: Bluetooth; scatternet; multi-layer; resolving bottleneck; reducing control overhead component

Download PDF

Paper 14: A Comparative Study on the Effect of Multiple Inheritance Mechanism in Java, C++, and Python on Complexity and Reusability of Code

Abstract: Two of the fundamental uses of generalization in object-oriented software development are the reusability of code and better structuring of the description of objects. Multiple inheritance is one of the important features of object-oriented methodologies that enables developers to combine concepts and increase the reusability of the resulting software. However, multiple inheritance is implemented differently in commonly used programming languages. In this paper, we use the Chidamber and Kemerer (CK) metrics to study the complexity and reusability of multiple inheritance as implemented in Python, Java, and C++. The analysis of results suggests that, of the three languages investigated, Python and C++ offer better reusability of software when using multiple inheritance, whereas Java has major deficiencies when implementing multiple inheritance, resulting in poor object structure.
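Python's multiple inheritance, one of the mechanisms compared, resolves ambiguous method lookups deterministically through C3 linearisation of the method resolution order (MRO); a minimal sketch with invented class names:

```python
class Persistable:
    def save(self):
        return f"saving {self.__class__.__name__}"

class Printable:
    def render(self):
        return f"rendering {self.__class__.__name__}"

class Report(Persistable, Printable):
    """Combines both capabilities via multiple inheritance."""
    pass

r = Report()
print(r.save(), r.render())
# C3 linearisation fixes the lookup order left-to-right:
print([c.__name__ for c in Report.__mro__])
# → ['Report', 'Persistable', 'Printable', 'object']
```

Java, by contrast, permits multiple inheritance only through interfaces (with default methods since Java 8), which is one source of the structural differences the paper measures.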

Author 1: Fawzi Albalooshi
Author 2: Amjad Mahmood

Keywords: Reusability; complexity; python; java; C++; CK metrics; multiple inheritance; software metrics

Download PDF

Paper 15: Fast Hybrid String Matching Algorithm based on the Quick-Skip and Tuned Boyer-Moore Algorithms

Abstract: The string matching problem is considered one of the most interesting research areas in computer science because it can be applied in many essential applications such as intrusion detection, search analysis, editors, internet search engines, information retrieval and computational biology. During the matching process, two main factors are used to evaluate the performance of a string matching algorithm: the total number of character comparisons and the total number of attempts. This study aims to produce an efficient hybrid exact string matching algorithm called the Sinan Sameer Tuned Boyer Moore-Quick Skip Search (SSTBMQS) algorithm by blending the best features extracted from the two selected original algorithms, Tuned Boyer-Moore and Quick-Skip Search. The SSTBMQS hybrid algorithm was tested on different benchmark datasets with different sizes and different pattern lengths. The sequential version of the proposed hybrid algorithm produces better results than its original algorithms (TBM and Quick-Skip Search) and than the Maximum-Shift hybrid algorithm, which is considered one of the most recent hybrid algorithms. The proposed hybrid algorithm requires fewer attempts and fewer character comparisons.
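The bad-character shift idea underlying the Boyer-Moore family can be sketched with the simpler Horspool variant (a baseline illustration of the shift-table mechanism, not the SSTBMQS hybrid itself):

```python
def horspool_search(text, pattern):
    """Indices of all occurrences, using the bad-character shift table."""
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return []
    # Shift by the distance from each character's last occurrence
    # (excluding the final position) to the end of the pattern.
    shift = {pattern[i]: m - 1 - i for i in range(m - 1)}
    hits, i = [], 0
    while i <= n - m:
        if text[i:i + m] == pattern:
            hits.append(i)
        # Slide by the shift for the character aligned with the pattern's end.
        i += shift.get(text[i + m - 1], m)
    return hits

print(horspool_search("GCATCGCAGAGAGTATACAGTACG", "GCAGAGAG"))  # → [5]
```

Hybrids such as SSTBMQS combine several shift heuristics so that each attempt slides the window as far as any of them allows, which is what reduces the attempt and comparison counts.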

Author 1: Sinan Sameer Mahmood Al-Dabbagh
Author 2: Nuraini bint Abdul Rashid
Author 3: Mustafa Abdul Sahib Naser
Author 4: Nawaf Hazim Barnouti

Keywords: Hybrid algorithm; string matching algorithm; Tuned Boyer-Moore algorithm; quick-skip search algorithm; Sinan Sameer Tuned Boyer Moore-Quick Skip Search (SSTBMQS)

Download PDF

Paper 16: Multi-Criteria Wind Turbine Selection using Weighted Sum Approach

Abstract: Wind energy is becoming a potential source of renewable and clean energy. An important factor that contributes to efficient generation of wind power is the use of an appropriate wind turbine. However, the task of selecting an appropriate, site-specific turbine is a complex problem. The complexity is due to the presence of several conflicting decision criteria in the decision process. Therefore, a decision is sought such that the best tradeoff is achieved among the selection criteria. Given the complexities encompassing the decision-making process, this study develops a multi-criteria decision model for turbine selection based on the weighted sum approach. Results indicate that the proposed methodology is effective in finding the most suitable turbine from a pool of 18 turbines.
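The weighted sum approach can be sketched as min-max normalisation of each criterion followed by a weighted score per alternative; the turbine names, criteria, weights and values below are all invented for illustration:

```python
def weighted_sum_rank(alternatives, weights, maximize):
    """Score each alternative after min-max normalising every criterion."""
    criteria = list(weights)
    lo = {c: min(a[c] for a in alternatives.values()) for c in criteria}
    hi = {c: max(a[c] for a in alternatives.values()) for c in criteria}
    def norm(c, v):
        if hi[c] == lo[c]:
            return 1.0
        x = (v - lo[c]) / (hi[c] - lo[c])
        return x if maximize[c] else 1.0 - x  # invert cost-type criteria
    scores = {name: sum(weights[c] * norm(c, a[c]) for c in criteria)
              for name, a in alternatives.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical turbines: annual energy yield (GWh), cost (M$), availability (%).
turbines = {
    "T1": {"energy": 8.2, "cost": 3.1, "availability": 97.0},
    "T2": {"energy": 7.5, "cost": 2.4, "availability": 96.0},
    "T3": {"energy": 9.0, "cost": 4.0, "availability": 95.5},
}
weights  = {"energy": 0.5, "cost": 0.3, "availability": 0.2}
maximize = {"energy": True, "cost": False, "availability": True}
print(weighted_sum_rank(turbines, weights, maximize)[0][0])  # → T1
```

Normalising first keeps criteria with large numeric ranges from dominating the sum; the weights then encode the decision-maker's priorities explicitly.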

Author 1: Shafiqur Rehman
Author 2: Salman A. Khan

Keywords: Wind turbine; renewable energy; weighted sum method; multi-criteria decision-making

Download PDF

Paper 17: An Adaptive CAD System to Detect Microcalcification in Compressed Mammogram Images

Abstract: Microcalcifications (MC) in mammogram images are an early sign of breast cancer, and their early detection is vital to improve its prognosis. Since MC appear as small dots in the mammogram image, with a size of less than 1 mm, and may easily be overlooked by the radiologist, the Computer Aided Diagnosis (CAD) approach can assist radiologists in improving their diagnostic accuracy. On the other hand, mammogram images are high-resolution images with a large file size, which makes their transfer through media difficult. Therefore, in this paper, two image compression techniques, Discrete Cosine Transform (DCT) with entropy coding and Singular Value Decomposition (SVD), were investigated to reduce the mammogram image size. A novel adaptive CAD system was then used to test the quality of the processed images based on the true positive (TP) ratio and the number of detected false positive (FP) regions in the mammogram image. The proposed adaptive CAD system uses the visual appearance of MC in the mammogram to detect potential MC regions. Five texture features are then implemented to reduce the number of detected FP regions. After applying the adaptive CAD system to 100 mammogram images from the USF and MIAS databases, it was found that DCT can reduce the image size while maintaining high quality, since the TP ratio is 87.6% with 11 FP regions, whereas with SVD the TP ratio is 79.1% with 26 FP regions.

Author 1: Ayman AbuBaker

Keywords: Mammogram image; texture features; Discrete Cosine Transform (DCT); Singular Value Decomposition (SVD)

Download PDF

Paper 18: A Learner Model for Adaptable e-Learning

Abstract: The advancement of Information and Communication Technology (ICT) has provided new opportunities for teaching and learning in the form of e-learning. However, developing specialized contents, accommodating the profiles of learners, e-learning pedagogy and the available ICT infrastructure are real challenges that need to be properly addressed for any successful e-learning system. Adaptability in an e-learning system can be used to address many of these challenges and issues. This paper proposes a learner model for adaptable e-learning. The proposed model is based on the findings of a survey conducted to investigate the profiles and preferences of local learners. The conceptual framework highlights the layered model of adaptable e-learning, with the knowledge level of learners as the foundation layer. The foundation layer is derived from four components of adaptable e-learning: domain, program pedagogy, student model and technology interface. The learner algorithm retrieves the adaptable contents from the domain model by analyzing the learner information stored in the student model. E-assessment is part of the program pedagogy, and the assessment results are used to control the presentation and navigation of adaptable contents during the learning process. The model has been tested on a Computer Science course offered at Post Graduate Diploma level by Allama Iqbal Open University, Islamabad, Pakistan. The results show that the proposed adaptable e-learning model significantly improved the knowledge level of the learners.

Author 1: Moiz Uddin Ahmed
Author 2: Nazir Ahmed Sangi
Author 3: Amjad Mahmood

Keywords: E-learning; adaptable; pedagogy; learning styles; e-assessment

Download PDF

Paper 19: Implementation of the RN Method on FPGA using Xilinx System Generator for Nonlinear System Regression

Abstract: In this paper, we propose a new approach aiming to improve the performance of the regularization networks (RN) method and speed up its computation. A considerable reduction in total computation time and high performance were accomplished by offloading computationally demanding tasks to an FPGA. Using Xilinx System Generator, a successful HW/SW co-design was constructed to accelerate the Gramian matrix computation. Experimental results involving two real data sets of the Wiener-Hammerstein benchmark with process noise prove the efficiency of the approach. The implementation results demonstrate the efficiency of the heterogeneous architecture, presenting a speed-up factor of 40-50 compared to the CPU simulation.
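The Gramian (kernel) matrix whose computation is offloaded to the FPGA is just the matrix of pairwise kernel evaluations over the training inputs; a scalar-input sketch with a Gaussian kernel (the kernel choice and sigma here are illustrative assumptions):

```python
from math import exp

def gram_matrix(xs, sigma=1.0):
    """Gaussian-kernel Gram matrix K[i][j] = k(x_i, x_j) for scalar inputs."""
    def k(a, b):
        return exp(-((a - b) ** 2) / (2 * sigma ** 2))
    return [[k(a, b) for b in xs] for a in xs]

K = gram_matrix([0.0, 1.0, 2.0])
print(round(K[0][1], 4))  # k(0, 1) = exp(-0.5) → 0.6065
```

The matrix is symmetric with a unit diagonal, and its O(n²) kernel evaluations are independent of one another, which is exactly why a systolic-array FPGA implementation pays off.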

Author 1: Intissar SAYEHI
Author 2: Okba TOUALI
Author 3: T. Saidani
Author 4: B. Bouallegue
Author 5: Mohsen MACHHOUT

Keywords: Machine learning; Reproducing Kernel Hilbert Spaces (RKHS); regularization networks; FPGA; HW/SW Co-simulation; systolic array architecture; PT326; Wiener-Hammerstein benchmark

Download PDF

Paper 20: A Parallel Genetic Algorithm for Maximum Flow Problem

Abstract: The maximum flow problem is a network optimization problem in flow graph theory. Many important applications rely on the maximum flow problem, and thus it has been studied by many researchers using different methods. The Ford-Fulkerson algorithm is the most popular algorithm used to solve the maximum flow problem, but its complexity is high. In this paper, a parallel Genetic algorithm is applied to find the maximum flow in a weighted directed graph, by computing the objective function value for each augmenting path from the source to the sink simultaneously in parallel steps in every iteration. The algorithm is implemented using the Message Passing Interface (MPI) library; results are obtained on a real distributed system, the IMAN1 supercomputer, and compared with a sequential version of Genetic-Maxflow. The simulation results show that the parallel algorithm speeds up the running time, achieving up to 50% parallel efficiency.
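For reference, the sequential Ford-Fulkerson baseline the parallel GA is positioned against can be sketched in its BFS-based Edmonds-Karp form (toy capacity matrix invented for illustration):

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp: repeatedly augment along shortest residual paths (BFS)."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for an augmenting path in the residual graph.
        parent = [-1] * n
        parent[source] = source
        q = deque([source])
        while q and parent[sink] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[sink] == -1:
            return total  # no augmenting path left: flow is maximal
        # Find the bottleneck along the path, then push it.
        v, bottleneck = sink, float("inf")
        while v != source:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        v = sink
        while v != source:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck  # residual back-edge
            v = u
        total += bottleneck

cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 3],
       [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))  # → 5
```

The GA replaces this one-path-at-a-time augmentation with a population of candidate augmenting paths evaluated simultaneously, which is what the MPI ranks parallelize.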

Author 1: Ola M. Surakhi
Author 2: Mohammad Qatawneh
Author 3: Hussein A. al Ofeishat

Keywords: Flow network; Ford Fulkerson algorithm; Genetic algorithm; Max Flow problem; MPI; multithread; supercomputer

Download PDF

Paper 21: Design of a High Speed Architecture of MQ-Coder for JPEG2000 on FPGA

Abstract: Digital imaging is omnipresent today. In many areas, digitized images replace their analog ancestors such as photographs or X-rays. The world of multimedia makes extensive use of image transfer and storage. The volume of these files is very high, and the need to develop compression algorithms that reduce the size of these files has been felt. The JPEG committee has developed a new standard in image compression that now also has International Standard status: JPEG 2000. The main advantage of this new standard is its adaptability: whatever the target application, resources or available bandwidth, JPEG 2000 will adapt optimally. However, this flexibility has a price: the complexity of JPEG 2000 is far superior to that of JPEG. This increased complexity can cause problems in applications with real-time constraints. In such cases, a hardware implementation is necessary. In this context, the objective of this paper is the realization of a JPEG 2000 encoder architecture satisfying real-time constraints. The proposed architecture is implemented using programmable chips (FPGAs) to ensure its effectiveness in real time. Optimizations of the renormalization module and the byte-out module are described in this paper. Besides, the reduction in computational steps effectively minimizes the time delay and hence allows a high operating frequency. The design was implemented targeting Xilinx Virtex 6 and Altera Stratix FPGAs. Experimental results show that the proposed hardware architecture achieves real-time compression of video sequences at 35 fps at HDTV resolution.

Author 1: Taoufik Salem Saidani
Author 2: Hafedh Mahmoud Zayani

Keywords: MQ-Coder; High speed architecture; FPGA; JPEG2000; VHDL

Download PDF

Paper 22: One-Year Survival Prediction of Myocardial Infarction

Abstract: Myocardial infarction is still one of the leading causes of death and morbidity, and early prediction of the disease can prevent or reduce its development. Machine learning can be an efficient tool for predicting such diseases. Many people have suffered myocardial infarction in the past; some survived, while others died after a period of time. A machine learning system can learn from the past data of those patients to predict the one-year survival or death of patients with myocardial infarction. Survival at one year, death at one year, and the survival period, together with clinical data of patients who have suffered myocardial infarction, can be used to train an intelligent system to make this prediction for current patients. This paper introduces the use of two neural networks, a feedforward neural network trained with the backpropagation learning algorithm (BPNN) and a radial basis function network (RBFN), trained on past data of patients who suffered myocardial infarction so as to generalize the one-year survival or death of new patients. Experimentally, both networks were tested on 64 instances and showed good generalization capability in predicting the correct diagnosis. However, the radial basis function network outperformed the backpropagation network on this prediction task.

Author 1: Abdulkader Helwan
Author 2: Dilber Uzun Ozsahin
Author 3: Rahib Abiyev
Author 4: John Bush

Keywords: Machine learning; myocardial infarction; backpropagation; radial basis function network; generalization; one-year survival prediction

Download PDF
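The core building block of the RBFN named in the abstract is the Gaussian radial basis unit, whose response decays with distance from a learned centre. The sketch below is illustrative only, not the authors' trained model: the two-feature centres (one per outcome class) and the sample points are hypothetical, and classifying by the strongest activation is a deliberately simplified stand-in for a full RBFN output layer.

```python
import math

def rbf_activation(x, center, sigma=1.0):
    """Gaussian radial basis unit: response decays with squared
    distance between the input vector and the centre."""
    dist2 = sum((xi - ci) ** 2 for xi, ci in zip(x, center))
    return math.exp(-dist2 / (2 * sigma ** 2))

# Hypothetical class centres over two normalized clinical features
centers = {"survived": [0.2, 0.3], "died": [0.8, 0.7]}

def predict(x):
    """Degenerate RBFN: pick the class whose centre fires strongest."""
    return max(centers, key=lambda c: rbf_activation(x, centers[c]))

print(predict([0.25, 0.35]))  # survived
print(predict([0.9, 0.6]))    # died
```

A real RBFN would place several centres per class (e.g. via clustering) and fit linear output weights over the activations.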

Paper 23: A Collaborative Approach for Effective Requirement Elicitation in Oblivious Client Environment

Abstract: Acquiring the desired requirements from the customer through the requirement elicitation process is critical, as the entire project depends on this initial activity. Poor requirement elicitation affects software quality. Various factors in an oblivious client environment, such as culture, language, gender, nationality, race, and politics, can affect the final deliverables. The interaction of complex values, attitudes, behavioral norms, beliefs, and communication approaches among stakeholders with different values may lead to misunderstanding and misinterpretation. This can lead to failure of, or dissatisfaction with, the final outcome, causing loss to both parties: the project then requires redesign or modification, which takes extra time and cost to achieve the desired results. The oblivious nature of the client's working environment is the major cause of poor requirement elicitation. This study focuses on the issues in an oblivious client environment, where the client is reluctant to provide the desired information, and proposes a novel requirement elicitation model for effective software development in such an environment. The resulting improvement in software quality after using this model was verified through a qualitative survey.

Author 1: Muhammad Kashif Hanif
Author 2: Muhammad Ramzan Talib
Author 3: Nauman Ul Haq
Author 4: Arfan Mansoor
Author 5: Muhammad Umer Sarwar
Author 6: Nafees Ayub

Keywords: Requirement elicitation; oblivious client; software development; quality improvement; elicitation model

Download PDF

Paper 24: An Empirical Investigation into Blended Learning Effects on Tertiary Students and Students' Perceptions of the Approach in Botswana

Abstract: The aim of this research was to conduct an empirical investigation into the effects of blended learning (BL) on tertiary students and students' perceptions of the approach. The study was driven by three objectives: 1) to assess the impact of BL on students enrolled in tertiary institutions; 2) to assess tertiary students' perceptions of the BL mode; and 3) to establish the extent to which BL is accepted in a typical institution or university learning environment. An extensive literature review led to the identification of two research questions used to meet these objectives: 1) Does blended learning transform learners' attitudes towards learning and improve results? 2) Does blended learning revolutionize learners' critical thinking levels and dispositions? Through this research the authors specifically sought to elucidate and understand the BL mode, its effects on students, and students' perceptions of it. After reviewing the literature, the researchers followed a quantitative approach, using a questionnaire to further understand the effects of the BL mode on students and their perceptions of it. The findings indicated that the BL mode has a positive impact on students and that students' perceptions of the BL mode were also positive, substantiating the literature review findings. In light of these findings and the objectives of the study, the authors conclude by proposing a framework that could be used for monitoring BL effects on tertiary students and students' perceptions of the approach, as the results of the study indicated a positive outlook on the BL mode.

Author 1: Gofaone Kgosietsile Kebualemang
Author 2: Alpheus Wanano Mogwe

Keywords: Blended learning; blended learning effects; Students’ perceptions

Download PDF

Paper 25: MAC Protocol with Regression based Dynamic Duty Cycle Feature for Mission Critical Applications in WSN

Abstract: Wireless sensor networks demand an energy-efficient and application-specific medium access control (MAC) protocol when deployed in critical areas that are not frequently accessible. In such areas, the residual energy of nodes becomes important along with efficient data delivery. Researchers have suggested many techniques using an adaptive duty cycle approach to improve the data delivery performance of protocols. Since a low duty cycle introduces delay and a high duty cycle causes energy losses in the network, the duty cycle can be adapted, according to the distribution of nodes near the area where an event occurs, the traffic behaviour, and the remaining energy of the nodes, for energy saving as well as efficient data delivery. After analysing the performance of the S-MAC protocol in critical scenarios with respect to residual energy, throughput, and packet delivery ratio, this paper suggests an improved mission-critical MAC protocol, called MC-MAC, which uses a novel regression-based adaptive duty cycle approach. The duty cycle is given by the regression pattern of traffic, while considering the performance of S-MAC for residual energy, throughput, and packet delivery ratio. An analytical model of the MC-MAC protocol is given, and the performance analysis shows that, compared to S-MAC, the proposed protocol saves 40% of the energy of the whole network and 20% of the energy of the critical nodes on the mission-critical path to the base station. Very few improved MAC protocols provide a mechanism to save the residual energy of critical nodes and hence improve the lifetime of the critical path. Because MC-MAC considers throughput and packet delivery ratio, along with residual energy, when calculating the regression formula for the traffic-based duty cycle, it improves on other critical MAC protocols, which trade off energy against throughput and packet delivery ratio.

Author 1: Gayatri Sakya
Author 2: Vidushi Sharma

Keywords: Regression based adaptive duty cycle approach; mission critical MAC; analytical model; performance analysis

Download PDF
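The regression-based duty cycle adaptation described above can be illustrated with a minimal sketch: fit a least-squares line to recent traffic samples, extrapolate one step ahead, and map the predicted load to a bounded duty cycle. This is not the authors' MC-MAC formula, which also factors in residual energy, throughput, and packet delivery ratio; the `scale` factor and the duty-cycle bounds are hypothetical.

```python
def fit_line(samples):
    """Ordinary least-squares slope/intercept over (index, value) points."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

def next_duty_cycle(traffic, d_min=0.05, d_max=0.5, scale=0.01):
    """Predict the next traffic level and clamp the derived duty cycle."""
    slope, intercept = fit_line(traffic)
    predicted = slope * len(traffic) + intercept  # one step ahead
    return max(d_min, min(d_max, predicted * scale))

print(next_duty_cycle([10, 12, 14, 16]))  # rising traffic -> higher duty cycle
```

Clamping to [d_min, d_max] captures the trade-off the abstract describes: the duty cycle never falls low enough to add excessive delay nor rises high enough to waste energy.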

Paper 26: An Internet-based Student Admission Screening System utilizing Data Mining

Abstract: This study proposes an internet-based student admission screening system utilizing data mining, enabling officers to reduce the time needed to evaluate applicants and the faculty to use fewer human resources when screening applicants against the proficiency requirements and criteria of each department. Another benefit is that the system can help applicants efficiently choose a specialization suited to their proficiency and capability. The system uses a decision tree based classification method. Prior to system development, six models were created and tested to find the most efficient model, which would later be used in the development of the internet-based student admission screening system. The first three of the six models employed a k-fold cross-validation technique, while the remaining three used a percentage split test technique. Experimental results revealed that the most efficient model was the classification model using Percentage Split (80), which provided a precision of 87.90%, recall of 87.80%, F-measure of 87.60% and accuracy of 87.82%. This model was therefore selected for the student admission screening system.

Author 1: Dolluck Phongphanich
Author 2: Wirat Choonui

Keywords: Classification method; data mining; decision tree; student admission screening

Download PDF
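The four evaluation figures quoted in the abstract (precision, recall, F-measure, accuracy) all derive from the confusion matrix of the held-out 20% split. The sketch below shows the standard definitions; the confusion-matrix counts are hypothetical, not the paper's data.

```python
def metrics(tp, fp, fn, tn):
    """Precision, recall, F-measure and accuracy from confusion counts."""
    precision = tp / (tp + fp)            # of predicted positives, how many correct
    recall = tp / (tp + fn)               # of actual positives, how many found
    f_measure = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f_measure, accuracy

# Hypothetical counts from an 80/20 percentage-split evaluation
p, r, f, a = metrics(tp=90, fp=10, fn=12, tn=88)
print(round(p, 4), round(r, 4), round(f, 4), round(a, 4))
```

Note that, as in the abstract, accuracy and F-measure generally differ: accuracy weighs true negatives, while the F-measure balances only precision and recall.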

Paper 27: EVOTLBO: A TLBO based Method for Automatic Test Data Generation in EvoSuite

Abstract: Nowadays software has a great impact on many aspects of human life, and software systems are responsible for the safety of major critical tasks. To prevent catastrophic malfunctions, rigorous quality testing techniques should be used during software development. Software testing is an effective technique for catching defects, but it significantly increases development cost; automated testing is therefore a major issue in software engineering. Search-Based Software Testing (SBST), and specifically the genetic algorithm, is the most popular technique in automated testing for achieving an appropriate degree of software quality. In this paper Teaching-Learning-Based Optimization (TLBO), a swarm intelligence technique, is proposed for automatic test data generation as well as for evaluation of test results. The algorithm is implemented in EvoSuite, a reference tool for search-based software testing. Empirical studies have been carried out on the SF110 dataset, which contains 110 Java projects from the online code repository SourceForge, and the results show that TLBO provides competitive results in comparison with the major genetic-based methods.

Author 1: Mohammad Mehdi Dejam Shahabi
Author 2: S. Parsa Badiei
Author 3: S. Ehsan Beheshtian
Author 4: Reza Akbari
Author 5: S. Mohammad Reza Moosavi

Keywords: EvoSuite; TLBO; test data generation

Download PDF
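For readers unfamiliar with TLBO, the sketch below shows its two phases on a toy continuous objective: a teacher phase that pulls each learner toward the current best solution, and a learner phase in which pairs of learners teach each other, with greedy acceptance of improving moves. This is a minimal illustration of the metaheuristic itself, not the authors' EvoSuite integration (where the "fitness" would be a test-suite coverage objective rather than the sphere function used here).

```python
import random

def tlbo_minimize(f, bounds, dim=2, pop_size=10, iters=30, seed=1):
    """Minimal TLBO sketch with greedy acceptance of improving moves."""
    random.seed(seed)
    lo, hi = bounds

    def clamp(x):
        return [min(hi, max(lo, v)) for v in x]

    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(iters):
        best = min(pop, key=f)
        mean = [sum(x[d] for x in pop) / pop_size for d in range(dim)]
        for i in range(pop_size):
            x = pop[i]
            # Teacher phase: move toward the best learner, away from the mean
            tf = random.choice([1, 2])  # teaching factor
            cand = clamp([x[d] + random.random() * (best[d] - tf * mean[d])
                          for d in range(dim)])
            if f(cand) < f(x):
                pop[i] = x = cand
            # Learner phase: move toward a better peer, away from a worse one
            peer = pop[random.randrange(pop_size)]
            step = 1.0 if f(peer) < f(x) else -1.0
            cand = clamp([x[d] + random.random() * step * (peer[d] - x[d])
                          for d in range(dim)])
            if f(cand) < f(x):
                pop[i] = cand
    return min(pop, key=f)

def sphere(x):
    return sum(v * v for v in x)

best = tlbo_minimize(sphere, (-5.0, 5.0))
print(sphere(best))
```

Unlike a genetic algorithm, TLBO has no crossover/mutation rates to tune, which is part of its appeal for automated test data generation.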

Paper 28: An Investigation into the Suitability of k-Nearest Neighbour (k-NN) for Software Effort Estimation

Abstract: Software effort estimation is an increasingly significant field, due to the overwhelming role of software in today's global market. Effort estimation involves forecasting the effort, in person-months or hours, required to develop a software system. It is vital for ideal planning and paramount for controlling the software development process. However, there is presently no optimal method to accurately estimate this effort, and inaccurate estimation leads to poor use of resources and perhaps failure of the software project. Effort estimation also plays a key role in deducing the cost of a software project: software cost estimation uses the effort estimates and project duration to predict the cost required to develop the project. Thus, effort estimation is essential, and there is always a need to improve its accuracy as much as possible. This study evaluates and compares the potential of the COnstructive COst MOdel II (COCOMO II) and k-Nearest Neighbour (k-NN) on a software project dataset. From the analysis of the results of each method, it may be concluded that the proposed k-NN method yields better performance than the other technique utilized in this study.

Author 1: Razak Olu-Ajayi

Keywords: Software effort estimation; machine learning; k-Nearest Neighbor; Constructive COst MOdel II

Download PDF
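Applied to effort estimation, k-NN amounts to analogy-based prediction: find the k historical projects most similar to the new one and average their recorded efforts. The sketch below is a generic illustration, not the study's implementation; the feature vectors (e.g. size in KLOC, team experience) and effort values are hypothetical.

```python
def knn_effort(query, projects, k=3):
    """Predict effort (person-months) as the mean effort of the k
    historical projects nearest to `query` in Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    neighbours = sorted(projects, key=lambda p: dist(query, p[0]))[:k]
    return sum(effort for _, effort in neighbours) / k

# Hypothetical history: ([size_kloc, team_experience], effort) pairs
history = [
    ([10, 3], 24.0),
    ([12, 2], 30.0),
    ([50, 4], 120.0),
    ([11, 3], 26.0),
]
print(knn_effort([11, 2], history, k=3))
```

In practice, features are normalized first so that a large-range attribute such as size does not dominate the distance, a common caveat with analogy-based estimation.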

Paper 29: An Adaptive Solution for Congestion Control in CoAP-based Group Communications

Abstract: The use of lightweight devices and constrained resources, as in Wireless Sensor Networks (WSNs), makes traffic patterns in the Internet of Things (IoT) different from those in conventional networks. One of the most prominent messaging protocols addressing the needs of these lightweight IoT nodes is the Constrained Application Protocol (CoAP). CoAP presents many advantages compared to other IoT application layer protocols; it ensures group communication via multicast between a server and multiple clients. Nevertheless, it does not support group communication from a client to multiple servers, relying instead on multiple unicasts. Because these constrained devices exchange a large number of messages and notifications, network congestion occurs. This paper proposes an adaptive congestion control algorithm designed for group communications using unicast between a client and multiple servers. Simulation results show that the proposed mechanism achieves higher performance in terms of response time and packet loss.

Author 1: Fathia OUAKASSE
Author 2: Said RAKRAK

Keywords: Internet of Things (IoT); Constrained Application Protocol (CoAP); congestion control; group communication; multicast; unicast

Download PDF
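Adaptive congestion control schemes for CoAP typically replace the protocol's fixed retransmission timeout with one driven by measured round-trip times. The sketch below shows the classic RFC 6298-style smoothed estimator that such schemes build on; it is background illustration, not the authors' algorithm, and the sample values are hypothetical.

```python
def update_rto(srtt, rttvar, rtt_sample, alpha=0.125, beta=0.25):
    """One smoothed-RTT update step: blend the new RTT sample into the
    running mean (srtt) and deviation (rttvar), then derive the
    retransmission timeout as srtt + 4 * rttvar."""
    rttvar = (1 - beta) * rttvar + beta * abs(srtt - rtt_sample)
    srtt = (1 - alpha) * srtt + alpha * rtt_sample
    return srtt, rttvar, srtt + 4 * rttvar

# Hypothetical state: previous srtt = 1.0 s, rttvar = 0.5 s,
# new RTT sample of 2.0 s observed from one of the unicast servers
srtt, rttvar, rto = update_rto(srtt=1.0, rttvar=0.5, rtt_sample=2.0)
print(srtt, rttvar, rto)
```

In a client-to-many-servers group, a per-destination estimator of this kind lets each unicast leg back off independently as its own path becomes congested.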

Paper 30: An Analytical Model for Availability Evaluation of Cloud Service Provisioning System

Abstract: Cloud computing is a major technological trend that continues to evolve and flourish. With the advent of the cloud, assuring high availability of cloud services has become a critical issue for cloud service providers and customers. Several studies have considered the problem of cloud service availability modeling and analysis. However, the complexity of the cloud service provisioning system and the deep dependency stack of its layered architecture make it challenging to evaluate the availability of cloud services. In this paper, we propose a novel analytical model of cloud service provisioning system availability. Further, we provide a detailed methodology for evaluating cloud service availability using series/parallel configurations and operational measures. The results of a case study using a simulated cloud computing infrastructure illustrate the usability of the proposed model.

Author 1: Fatimah M. Alturkistani
Author 2: Saad S. Alaboodi

Keywords: Cloud computing; availability evaluation; series and parallel configuration; infrastructure as service

Download PDF
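The series/parallel configurations mentioned in the abstract follow the standard reliability-block rules: availabilities multiply in series, while in parallel the service fails only if every replica fails. The sketch below illustrates those rules on a hypothetical stack (two redundant VMs behind hypervisor and network layers); the component availabilities are made-up examples, not the paper's case-study figures.

```python
def series(avails):
    """All components must be up: availabilities multiply."""
    a = 1.0
    for x in avails:
        a *= x
    return a

def parallel(avails):
    """Service is up unless every redundant replica is down."""
    u = 1.0
    for x in avails:
        u *= (1.0 - x)
    return 1.0 - u

# Hypothetical layered stack: two redundant VMs in parallel,
# in series with a hypervisor layer and a network layer
vm_pair = parallel([0.99, 0.99])
service = series([vm_pair, 0.999, 0.995])
print(round(service, 6))
```

Note how redundancy lifts the VM pair to "four nines" even though each VM alone offers only 0.99, while the serial layers then cap the end-to-end figure.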

Paper 31: Network Packet Classification using Neural Network based on Training Function and Hidden Layer Neuron Number Variation

Abstract: Distributed denial of service (DDoS) is a structured network attack coming from various sources and fused to form a large packet stream. DDoS packet streams behave like normal packet streams, making the two very difficult to distinguish. Network packet classification is one network defense against DDoS attacks. An Artificial Neural Network (ANN) can be an effective tool for network packet classification given an appropriate combination of hidden-layer neuron count and training function. This study found that the best classification accuracy, 99.6%, was achieved with the number of hidden-layer neurons set to either half or twice the number of input neurons, but that setting it to twice the number of input neurons gives stable accuracy across all training functions. An ANN with the Quasi-Newton training function is not much affected by variation in the number of hidden-layer neurons, unlike ANNs with the Scaled-Conjugate and Resilient-Propagation training functions.

Author 1: Imam Riadi
Author 2: Arif Wirawan Muhammad
Author 3: Sunardi

Keywords: Classification; DDoS; neural network; training function; hidden layer

Download PDF
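The experimental design the abstract describes is a grid over hidden-layer sizes derived from the input width, crossed with training functions. A minimal sketch of that grid (with a hypothetical input width of 6 packet features, not the paper's actual feature count):

```python
# Hypothetical sketch of the configuration grid compared in the study:
# hidden-layer sizes derived from the number of input neurons,
# crossed with three training functions.
n_inputs = 6  # hypothetical number of packet features
hidden_sizes = [n_inputs // 2, n_inputs, n_inputs * 2]
train_fns = ["quasi-newton", "scaled-conjugate", "resilient-propagation"]
grid = [(h, fn) for h in hidden_sizes for fn in train_fns]
print(len(grid))  # 9 (hidden size, training function) configurations
```

Each configuration would be trained and scored on held-out traffic, which is how the study isolates the "twice the input width" setting as the most stable choice.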

Paper 32: Classifying Natural Language Text as Controlled and Uncontrolled for UML Diagrams

Abstract: Natural language text falls into the categories of controlled and uncontrolled natural language. In this paper, an algorithm is presented to determine whether a given text is controlled or uncontrolled. The parameters for controlled and uncontrolled languages, and a framework for a repository of UML diagrams, are provided.

Author 1: Nakul Sharma
Author 2: Prasanth Yalla

Keywords: Natural Language Processing; UML Diagrams; Software Engineering

Download PDF

Paper 33: Automatic Fuzzy-based Hybrid Approach for Segmentation and Centerline Extraction of Main Coronary Arteries

Abstract: Coronary artery segmentation and centerline extraction is an important step in diagnosing Coronary Artery Disease. The main purpose of the fully automated approach presented here is to help the non-invasive clinical diagnosis process be performed quickly and with accurate results. In this paper, a hybrid scheme is proposed to segment the coronary arteries and extract their centerlines from Computed Tomography Angiography volumes. The proposed automatic hybrid segmentation approach combines the Hough transform with a fuzzy-based region growing algorithm. First, a circular Hough transform is used to initially detect the aortic circle. Then, the well-known fuzzy c-means algorithm is employed to detect the seed points for the region growing algorithm, resulting in a 3D binary volume. Finally, the centerlines of the segmented arteries are extracted from the segmented 3D binary volume using a skeletonization-based method. The proposed algorithm is tested and evaluated using a benchmark database provided by the Rotterdam Coronary Artery Algorithm Evaluation Framework. A comparative study shows that the proposed hybrid scheme achieves higher accuracy than the most closely related, recently published work, at reasonable computational cost.

Author 1: Khadega Khaled
Author 2: Mohamed A. Wahby Shalaby
Author 3: Khaled Mostafa El Sayed

Keywords: Automatic segmentation; coronary arteries; computed tomography angiography; centerlines extraction

Download PDF
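The fuzzy c-means step named in the abstract assigns each voxel a soft membership in every cluster rather than a hard label, which is what makes it suitable for picking confident seed points. The sketch below computes the standard FCM membership formula for a single 1-D intensity sample against fixed centres; it is an illustration of the membership rule only (the centres and intensity are hypothetical), not the paper's full iterative clustering on CTA volumes.

```python
def fcm_memberships(x, centers, m=2.0):
    """Fuzzy c-means membership of scalar sample x in each cluster centre:
    u_i = 1 / sum_k (d_i / d_k) ** (2 / (m - 1)), memberships sum to 1."""
    d = [abs(x - c) for c in centers]
    if 0.0 in d:  # sample coincides with a centre: crisp membership
        return [1.0 if di == 0.0 else 0.0 for di in d]
    memberships = []
    for i in range(len(centers)):
        denom = sum((d[i] / d[k]) ** (2 / (m - 1)) for k in range(len(centers)))
        memberships.append(1.0 / denom)
    return memberships

# Hypothetical 1-D intensities: centres for background vs. vessel lumen
u = fcm_memberships(180.0, centers=[60.0, 200.0])
print(u)  # strongly, but not exclusively, in the "lumen" cluster
```

Voxels whose maximum membership exceeds a threshold would then serve as seeds for the region-growing stage.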

Paper 34: An Improvement of Power Saving Class Type II Algorithm in WiMAX Sleep-mode

Abstract: Because users can connect to a WiMAX (IEEE 802.16) network wirelessly with large-scale movement capability, they inevitably cannot always access electrical power sources when desired. A mechanism is therefore needed to reduce power consumption, and three power saving classes have been defined in WiMAX, each designed for a specific application. Although using a suitable power saving class (PSC) can reduce power consumption significantly, a lack of cross-layer coordination can reduce the efficiency of the power saving mechanism. Since the real-time services associated with power saving class type II (PSC II) have great importance and vast applications, this paper proposes an improved PSC II algorithm for WiMAX that not only guarantees WiMAX quality of service (QoS) but also provides cross-layer coordination using a proactive buffer, resulting in less power consumption. Computer simulations comparing the performance of the proposed algorithm with the predefined PSC II algorithm in WiMAX show that the proposed algorithm reduces power consumption by 60 percent while WiMAX QoS is still guaranteed.

Author 1: Mehrdad Davoudi
Author 2: Mohammad-Ali Pourmina
Author 3: Ahmad Salahi

Keywords: WiMAX; IEEE 802.16; sleep mode; power saving class type II (PSC II); proactive buffer; quality of service (QoS)

Download PDF

Paper 35: ASCII based Sequential Multiple Pattern Matching Algorithm for High Level Cloning

Abstract: For high-level clones, current research on clone detection focuses on developing better algorithms. Many algorithms have been proposed for this purpose, but more efficient and robust methods are still required. Pattern matching is one of the favorable approaches with the required potential in computer science research. High-level structural clones comprise lower-level, smaller clones with similar code fragments; the repetitive occurrence of simple clones in a file may thus produce higher, file-level clones. The proposed algorithm detects repetitive patterns in the same file as well as clones at a higher level of abstraction, such as the file level. In genetics, a number of algorithms are used to identify DNA sequences; compared with some of these existing algorithms, the proposed ASCII-based sequential multiple pattern matching algorithm gives better performance. The present method increases overall performance, reducing the number of comparisons and the characters-per-comparison ratio by avoiding unnecessary DNA comparisons.

Author 1: Manu Singh
Author 2: Vidushi Sharma

Keywords: Pattern matching; ASCII based; high level clone; file clone

Download PDF
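One common way an ASCII-based filter cuts down character comparisons, in the spirit of (though not necessarily identical to) the algorithm above, is a Rabin-Karp-style rolling sum: slide a window over the text, compare the window's ASCII sum to the pattern's sum first, and fall back to a character-by-character check only on a sum match. The DNA string and patterns below are hypothetical.

```python
def ascii_multi_search(text, patterns):
    """Find all occurrences of each pattern using a rolling ASCII-sum
    filter: most windows are rejected by a single integer comparison,
    and full character comparison runs only when the sums agree."""
    hits = {p: [] for p in patterns}
    for p in patterns:
        if len(p) > len(text):
            continue
        target = sum(map(ord, p))
        window = sum(map(ord, text[:len(p)]))
        for i in range(len(text) - len(p) + 1):
            if window == target and text[i:i + len(p)] == p:
                hits[p].append(i)
            if i + len(p) < len(text):  # roll the window one character
                window += ord(text[i + len(p)]) - ord(text[i])
    return hits

dna = "ACGTACGTGACGT"
print(ascii_multi_search(dna, ["ACGT", "GAC"]))  # {'ACGT': [0, 4, 9], 'GAC': [8]}
```

Because equal sums can arise from different strings, the verification step is essential for correctness; the sum merely prunes most candidate positions cheaply.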

Paper 36: Mobile Malware Classification via System Calls and Permission for GPS Exploitation

Abstract: Nowadays smartphones are used worldwide for effective communication, making our lives easier. Unfortunately, most current cyber threats, such as identity theft and mobile malware, target smartphone users and are motivated by profit. They spread quickly among users, especially via Android smartphones, and exploit the devices in many ways, such as through the Global Positioning System (GPS), SMS, call logs, audio, or images. To detect mobile malware, this paper presents 32 patterns of permissions and system calls for GPS exploitation, obtained using a covering algorithm. The experiment was conducted in a controlled lab environment using static and dynamic analyses, with 5560 samples from the Drebin malware dataset used for training and 500 mobile apps from the Google Play Store for testing. As a result, 21 of the 500 apps matched these 32 patterns. The new patterns can serve as guidance for researchers in the field in identifying mobile malware and as input for the formation of a new mobile malware detection model.

Author 1: Madihah Mohd Saudi
Author 2: Muhammad ‘Afif b. Husainiamer

Keywords: Mobile malware; Global Positioning System (GPS) exploitation; system call; permission; covering algorithm; static and dynamic analyses

Download PDF

Paper 37: A Review and Proof of Concept for Phishing Scam Detection and Response using Apoptosis

Abstract: The phishing scam is a well-known fraudulent activity in which victims are tricked into revealing confidential information, especially financial information. There are various phishing schemes, such as deceptive phishing, malware-based phishing, DNS-based phishing, and many more. In this paper, a systematic review of existing work on phishing detection and response techniques, together with apoptosis, is presented and evaluated. Furthermore, a case study demonstrating a proof of concept of how phishing works is discussed. The paper also discusses the challenges of, and potential future research on, integrating a phishing detection and response model with apoptosis. This paper can be used as a reference and guide for further study of phishing detection and response.

Author 1: A Yahaya Lawal Aliyu
Author 2: Madihah Mohd Saudi
Author 3: Ismail Abdullah

Keywords: Phishing; apoptosis; phishing detection; phishing response

Download PDF

Paper 38: Identifying Top-k Most Influential Nodes by using the Topological Diffusion Models in the Complex Networks

Abstract: Social networks are a subset of complex networks, in which users are nodes and the connections between users are edges. One of the important issues in social network analysis is identifying influential nodes. Centrality is an important method among the many practiced for identifying influential nodes; centrality criteria include degree centrality, betweenness centrality, closeness centrality, and eigenvector centrality, all of which are used to identify influential nodes in weighted and unweighted networks. TOPSIS is a multi-criteria method that employs all four centrality criteria simultaneously to identify influential nodes, a fact that makes it more accurate than the individual criteria. Another method for identifying the top-k influential nodes in complex social networks is the heat diffusion kernel, one of the topological diffusion models, which identifies nodes based on heat diffusion. In the present paper, to use the topological diffusion model, the social network graph is constructed from interactive and non-interactive activities; then, based on the diffusion, the dynamic equations of the graph are modeled. Improved heat diffusion kernels are then used to improve the accuracy of influential node identification. After several runs of the topological diffusion models, the users who diffused the most heat are chosen as the most influential nodes in the social network. Finally, to evaluate the model, the proposed method is compared with the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS).

Author 1: Maryam Paidar
Author 2: Sarkhosh Seddighi Chaharborj
Author 3: Ali Harounabadi

Keywords: Topological Diffusion; TOPSIS; Social Network; Complex Network; Interactive and Non-interactive Activities; Heat Diffusion Kernel

Download PDF
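The TOPSIS baseline used for comparison in the abstract ranks nodes by their closeness to an ideal point built from all four centrality criteria at once. The sketch below implements the standard TOPSIS steps (vector normalization, weighting, distances to ideal and anti-ideal, closeness score) on a hypothetical node-by-criteria matrix; the centrality values and equal weights are made-up inputs, not the paper's data.

```python
def topsis_rank(matrix, weights):
    """TOPSIS over a node-by-criteria matrix (rows: nodes; columns:
    degree, betweenness, closeness, eigenvector centrality, all treated
    as benefit criteria). Returns a closeness score in [0, 1] per node."""
    ncols = len(matrix[0])
    # vector-normalise each column, then apply criterion weights
    norms = [sum(row[j] ** 2 for row in matrix) ** 0.5 for j in range(ncols)]
    v = [[w * row[j] / norms[j] for j, w in enumerate(weights)] for row in matrix]
    ideal = [max(col) for col in zip(*v)]   # best value per criterion
    anti = [min(col) for col in zip(*v)]    # worst value per criterion
    scores = []
    for row in v:
        d_pos = sum((a - b) ** 2 for a, b in zip(row, ideal)) ** 0.5
        d_neg = sum((a - b) ** 2 for a, b in zip(row, anti)) ** 0.5
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Hypothetical nodes x [degree, betweenness, closeness, eigenvector]
matrix = [
    [5, 0.30, 0.50, 0.40],
    [3, 0.10, 0.45, 0.20],
    [8, 0.60, 0.70, 0.90],
]
scores = topsis_rank(matrix, weights=[0.25] * 4)
print(scores)  # node 2 dominates every criterion, so it ranks first
```

The top-k influential nodes are then simply the k nodes with the highest closeness scores.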

Paper 39: Grid Connected PV Plant based on Smart Grid Control and Monitoring

Abstract: Today, the smart grid is considered an attractive technology for monitoring and management of grid-connected renewable energy plants due to its flexibility, network architecture and communication between providers and consumers. Smart grids have been deployed so that renewable energy resources can be securely connected to the grid; indeed, this technology aims to complement the demand for power generation and distributed storage. For this reason, a system powered by photovoltaics (PV) was chosen as an interesting solution due to its competitive cost and technical structure. To achieve this goal, a realistic smart grid configuration design is presented and evaluated using a radial infrastructure. Three voltage models are used to demonstrate the grid design. Smart meters are included via SCADA to acquire and monitor the electrical signal characteristics during the day and to evaluate them through a statistical report. An operational data center (ODC) is used to collect the smart meters' statistical reports and to review the demand-offer (DO) power balance. The results obtained with Matlab/Simulink are validated with the well-known ETAP software.

Author 1: Ibrahim Benabdallah
Author 2: Abeer Oun
Author 3: Adnène Cherif

Keywords: Distributed generation systems (DGS); smart grid (SG); smart meters (SM); photovoltaic systems (PVS)

Download PDF

Paper 40: Modeling and FPGA Implementation of a Thermal Peak Detection Unit for Complex System Design

Abstract: This paper presents the modeling and implementation of a thermal peak detection unit for complex system design. The modeling step, the main objective of this work, starts with modeling the heat source formula using the Simulink/Matlab tool. The input temperature, the angles, the distance, as well as certain frequencies are then obtained from this formula using the GDS (Gradient Direction Sensor) method based on ROs (Ring Oscillators). Before implementation on the FPGA board, VHDL code is used to describe the thermal peak detection unit, in order to verify and validate the whole module. This work offers a solution to thermally induced stress and local overheating in complex system design, which has been a major concern for designers of integrated circuits. In this paper a DE1 FPGA board (Cyclone V family, 5CSEMA5F31C6) is used for the implementation.

Author 1: Aziz Oukaira
Author 2: Ouafaa Ettahri
Author 3: Ahmed Lakhssassi

Keywords: Thermal peak; complex system design; MATLAB; GDS; RO; FPGA; DE1

Download PDF

Paper 41: Quizrevision: A Mobile Application using the Google MIT App Inventor Language Compared with LMS

Abstract: At Qassim University, the Blackboard (https://lms.qu.edu.sa) Learning Management System (LMS) is used. An exploratory study was conducted on 105 randomly selected students attending Qassim University; of these, 91 students (87%) affirmed that they did not use the LMS as a study aid. This paper describes how the MIT App Inventor language was used to develop a mobile application (app) for the Android operating system. The app, Quizrevision, enables students to review course knowledge and concepts. An online survey was used to investigate students' perceptions and gather their feedback regarding the use of Quizrevision as a study aid compared to the LMS, and an achievement test was used to examine the improvement in students' scores. Data were collected from 114 students taking the Phonetics course (Arab 342) in the Arabic Language Department (ALD) of Qassim University; 63 of them (55.27%) were male, and 51 (44.73%) were female. Descriptive statistics, chi-square, and t-tests were used to analyze the data. The results indicated that the Quizrevision app supported the students' achievement. There was a positive attitude towards using the Quizrevision app, as well as higher engagement in using the app compared with the LMS. The findings also confirm that students prefer m-learning apps over LMSs for reviewing course concepts and knowledge, and student scores improved after using the app.

Author 1: Mohamed A. Amasha
Author 2: Shaimaa Al-Omary

Keywords: Quizrevision; mobile application; LMS; e-learning; e-course; MIT APP Inventor; Android devices

Download PDF

Paper 42: A Systematic Literature Review to Determine the Web Accessibility Issues in Saudi Arabian University and Government Websites for Disabled People

Abstract: The Kingdom of Saudi Arabia has shown great commitment and support over the past 10 years towards higher education and the transformation of manual government services into online services through the web. As a result, the number of university and e-government websites has increased, but without following proper accessibility guidelines. Because of this, many disabled people may not fully benefit from the content available on university and government websites. According to a World Health Organization (WHO) report, more than one billion people all over the world face some kind of disability. Almost 720,000 Saudi nationals are disabled, about 4% of the total Saudi population. The objective of this study is to review the existing literature to identify the web accessibility issues in Saudi Arabian university and government websites through a systematic literature review. Several scholarly databases were searched for research studies on web accessibility evaluation, globally and in Saudi Arabia, published from 2009 to 2017. Only 15 research articles out of 123 (6 based on Saudi Arabia and 9 global) fulfilled the selection criteria. The literature review reveals that web accessibility is a global issue and that many countries around the world, including Saudi Arabia, face web accessibility challenges. Moreover, the web accessibility guidelines WCAG 1.0 and WCAG 2.0 do not address many problems faced by users, and some guidelines were not effective in avoiding user problems. The findings of this study open a new dimension in web accessibility, calling for extensive research to determine web accessibility criteria and standards in the context of Saudi Arabia.

Author 1: Muhammad Akram
Author 2: Rosnafisah Bt Sulaiman

Keywords: Web accessibility; disability; e-government; web contents accessibility guidelines; WCAG 1.0; WCAG 2.0; accessibility evaluation

Download PDF

Paper 43: Secure Encryption for Wireless Multimedia Sensors Network

Abstract: Security in wireless multimedia sensor networks is a crucial challenge driven by environmental and hardware constraints and by energy consumption. Standard encryption algorithms are not suited to real-time applications on such networks. One solution to these challenges is to maintain security while reducing energy consumption. This article presents a new approach with high energy efficiency, a high level of security, and strong robustness against statistical and differential attacks. The new approach, called Shift-AES, uses simple operations such as substitution, transposition by exclusive-or, and shift, and preserves Shannon's principles of diffusion and confusion. Several criteria for measuring the performance of the approach, such as visual inspection, histogram analysis, image entropy, the correlation of two adjacent pixels, resistance to differential attacks, and run-time and throughput analysis, are successfully evaluated. The experimental evaluation of the proposed Shift-AES algorithm shows that it is well suited to wireless multimedia sensor networks. With a satisfactory level of security and better timeliness and transmission throughput compared with the standard AES encryption algorithm, this approach increases the lifetime of the network.

Author 1: Amina Msolli
Author 2: Haythem Ameur
Author 3: Abdelhamid Helali
Author 4: Hassen Maaref

Keywords: Wireless Multimedia Sensor Network (WMSN); image encryption; Shift-AES; security

Download PDF
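The three operations Paper 43's abstract names (substitution, transposition by exclusive-or, and shift) can be illustrated with a toy round. The S-box, key, and shift amount below are placeholders for illustration only, not the actual Shift-AES specification:

```python
# Illustrative only: a toy round combining substitution, XOR, and a
# cyclic shift. The affine S-box and key are made-up placeholders.

TOY_SBOX = [(17 * i + 31) % 256 for i in range(256)]  # invertible since gcd(17, 256) = 1

def rotl8(b, n):
    """Cyclic left shift of a single byte by n positions."""
    return ((b << n) | (b >> (8 - n))) & 0xFF

def toy_round(block, key):
    out = []
    for i, b in enumerate(block):
        b = TOY_SBOX[b]          # substitution
        b ^= key[i % len(key)]   # transposition by exclusive-or with the key
        b = rotl8(b, 3)          # shift
        out.append(b)
    return bytes(out)

ct = toy_round(b"sensornode", b"\x0f\x1e\x2d\x3c")
```

Each operation is constant-time and byte-local, which is why such designs are attractive on energy-constrained sensor nodes.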

Paper 44: Towards Efficient Graph Traversal using a Multi-GPU Cluster

Abstract: Graph processing has always been challenging because of its inherent complexities. These include scalability to larger data sets and clusters, dependencies between vertices in the graph, irregular memory accesses during processing and traversals, minimal locality of reference, etc. The literature contains several implementations of parallel graph processing on single-GPU systems, but only a few for single-node and multi-node multi-GPU systems. In this paper, the prospects of improving large graph traversals by utilizing a multi-GPU cluster for the Breadth First Search algorithm are studied. To this end, DiGPU, a CUDA-based implementation of graph traversal for shared-memory and distributed-memory multi-GPU systems, is proposed. An open-source software module has also been developed and verified through a set of experiments. Further, evaluations are demonstrated on a local cluster as well as on the CDER cluster. Finally, experimental analysis is performed on several graph data sets using different system configurations to study the impact of load distribution, with respect to GPU specification, on the performance of our implementation.

Author 1: Hina Hameed
Author 2: Nouman M Durrani
Author 3: Sehrish Hina
Author 4: Jawwad A. Shamsi

Keywords: Graph processing; GPU cluster; distributed graph traversal API; CUDA; BFS; MPI

Download PDF
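The level-synchronous, frontier-based traversal that GPU BFS implementations such as the one in Paper 44 parallelize can be sketched in plain Python; in a GPU kernel, each vertex of the current frontier would map to a thread:

```python
# Level-synchronous BFS: expand one frontier per iteration.
from collections import defaultdict

def bfs_levels(edges, source):
    """Return {vertex: level} for all vertices reachable from source."""
    adj = defaultdict(list)
    for u, v in edges:          # build an undirected adjacency list
        adj[u].append(v)
        adj[v].append(u)
    level = {source: 0}
    frontier = [source]
    depth = 0
    while frontier:
        depth += 1
        nxt = []
        for u in frontier:      # this loop is the per-level parallel work
            for v in adj[u]:
                if v not in level:
                    level[v] = depth
                    nxt.append(v)
        frontier = nxt
    return level

lv = bfs_levels([(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)], 0)
```

The per-level barrier between frontiers is what makes the pattern map naturally onto kernel launches on one GPU, or onto MPI synchronization points across a multi-GPU cluster.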

Paper 45: A Japanese Tourism Recommender System with Automatic Generation of Seasonal Feature Vectors

Abstract: Tourism recommender systems are widely used in daily life to recommend tourist spots that match users' preferences. In this paper, we propose a content-based tourism recommender system that takes the user's travel season into account. To characterize seasonally varying features of spots, the proposed system generates seasonal feature vectors in three steps: 1) identify the relevant vocabulary through Wikipedia; 2) identify the trend over all spots through Twitter for each season; and 3) raise the weight of words contained in each identified trend. When deciding on a recommendation, the system not only matches the user profile with spot features but also takes the user's travel season into account. The effectiveness of the proposed system is evaluated through a series of experiments, i.e., computer simulation and questionnaire evaluation. The results indicate that: 1) the vectors reflect the similarity of spots for the designated time period; and 2) using such spot vectors, the system successfully realizes seasonal tourism recommendation.

Author 1: Guan-Shen Fang
Author 2: Sayaka Kamei
Author 3: Satoshi Fujita

Keywords: Tourism recommender system; seasonal feature vector; Wikipedia; Twitter

Download PDF
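Step 3 of Paper 45's pipeline (raising the weight of trend words) and the season-aware cosine matching can be sketched as follows; the word weights, trend set, and boost factor are invented illustration values, not the paper's parameters:

```python
import math

def seasonal_vector(base_weights, trend_words, boost=2.0):
    """Raise the weight of words found in the season's Twitter trend."""
    return {w: wt * (boost if w in trend_words else 1.0)
            for w, wt in base_weights.items()}

def cosine(a, b):
    """Cosine similarity between two sparse word-weight vectors."""
    dot = sum(a.get(w, 0.0) * b.get(w, 0.0) for w in set(a) | set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

spot = {"maple": 1.0, "temple": 1.0, "garden": 0.5}
autumn = seasonal_vector(spot, {"maple"})   # the autumn trend boosts "maple"
profile = {"maple": 1.0, "foliage": 1.0}
```

Boosting trend words moves the spot vector toward users whose profiles share that season's vocabulary, which is the mechanism behind the seasonal recommendation.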

Paper 46: A Survey of Big Data Analytics in Healthcare

Abstract: Big data analytics has earned remarkable interest in industry as well as academia due to the knowledge, information, and wisdom that can be extracted from big data. Big data and cloud computing are two of the most important trends defining new emerging analytical tools. Big data has various applications in different fields such as traffic control, weather forecasting, fraud detection, security, education enhancement, and health care. Extracting knowledge from large amounts of data has become a challenging task. Big data analysis can also support effective decision making in healthcare through modifications to existing machine learning algorithms. In this paper, the drawbacks of existing machine learning algorithms for big data analysis in healthcare are summarized.

Author 1: Muhammad Umer Sarwar
Author 2: Muhammad Kashif Hanif
Author 3: Ramzan Talib
Author 4: Awais Mobeen
Author 5: Muhammad Aslam

Keywords: Big data; analytics; healthcare; analytical tools; machine learning

Download PDF

Paper 47: An Image Encryption Technique based on Chaotic S-Box and Arnold Transform

Abstract: In recent years, chaos has been used extensively in cryptographic systems. In this regard, one-dimensional chaotic maps have gained increased attention because of their intrinsic simplicity and ease of application. Many image encryption algorithms based on chaotic substitution boxes (S-boxes) have been studied in the last few years, but some of them appear to lack robustness. In this paper, we propose an efficient scheme for image encryption that combines chaotic substitution based on the tent map with the scrambling effect of the Arnold transform. The proposed S-box construction algorithm is, on one hand, straightforward and saves computational labour, while on the other, it delivers highly efficient performance. The scheme uses an S-box based on the 1-D chaotic tent map. We partially encrypt the image using this S-box and then apply a certain number of iterations of the Arnold transform to obtain the fully encrypted image; for decryption we apply the reverse process. The strength of the proposed method is assessed with the most significant statistical analysis techniques, and the results show that the proposed algorithm performs coherently.

Author 1: Shabieh Farwa
Author 2: Tariq Shah
Author 3: Nazeer Muhammad
Author 4: Nargis Bibi
Author 5: Adnan Jahangir
Author 6: Sidra Arshad

Keywords: Chaos; image encryption; tent map; S-box; Arnold transform; statistical analyses

Download PDF
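Both building blocks of Paper 47's scheme are concrete enough to sketch: a permutation S-box obtained by ranking tent-map iterates (one common chaotic S-box construction, not necessarily the authors' exact one) and one iteration of the Arnold cat map; the seed and parameters are illustrative:

```python
def tent(x, mu=1.99):
    """One iterate of the chaotic tent map on [0, 1]."""
    return mu * x if x < 0.5 else mu * (1.0 - x)

def tent_sbox(seed=0.37, size=16):
    """Rank a tent-map orbit to obtain a permutation of 0..size-1."""
    x, orbit = seed, []
    for _ in range(size):
        x = tent(x)
        orbit.append(x)
    return sorted(range(size), key=lambda i: orbit[i])

def arnold(img):
    """One iteration of the Arnold cat map (x, y) -> (x + y, x + 2y) mod n."""
    n = len(img)
    out = [[0] * n for _ in range(n)]
    for x in range(n):
        for y in range(n):
            out[(x + y) % n][(x + 2 * y) % n] = img[x][y]
    return out

sbox = tent_sbox()
img = [[r * 4 + c for c in range(4)] for r in range(4)]
scrambled = arnold(img)
```

The Arnold map is periodic (for a 4x4 image, three iterations restore the original), which is why decryption can simply continue iterating or apply the inverse map.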

Paper 48: Fault-Tolerant Model Predictive Control for a Z(TN)-Observable Linear Switching Systems

Abstract: This work considers the control and state observation of linear switched systems with actuator faults. A particular problem is studied: the occurrence of a non-observable subsystem in the switching sequence. In this case, the accuracy of the state estimates decreases, affecting observer-based fault detection algorithms. In this paper, we propose a solution based on constrained switching control in a predictive scheme. An extension to fault-tolerant control is derived, using several hybrid observers for estimation and fault detection together with a reconfigurable finite-control-set model predictive controller. The paper includes experimental results on a multicellular converter that demonstrate the efficiency of the method.

Author 1: Abir SMATI
Author 2: Wassila CHAGRA
Author 3: Moufida KSSOURI

Keywords: Switching systems; Z(TN)-observability; finite control set predictive control; fault tolerant control; multicellular converter

Download PDF

Paper 49: Impact of Distributed Generation on the Reliability of Local Distribution System

Abstract: With the growth of distributed generation (DG) and renewable energy resources, the power sector is becoming more sophisticated, and distributed generation technologies, with their diverse impacts on the power system, are an attractive area for researchers. Reliability is a vital aspect of electric power systems, covering continuity of supply and customer satisfaction. Around the world, many power generation and distribution companies conduct reliability studies to ensure a continuous supply of power to their customers. Most reliability problems in power systems originate in the distribution network. In this research, a reliability analysis of a distribution system is performed. Interruption frequency and interruption duration increase as the distance of load points from the feeder increases. Injecting a single DG unit into the distribution system improves its reliability; injecting multiple DG units at different locations near load points increases reliability further, and introducing multiple DG units at a single location also improves reliability. The reliability of the distribution system remains unchanged when the size of the DG unit is varied. Different reliability tests were performed to find the optimum location for DG in the distribution system. For these analyses, distribution feeder bus 2 of the RBTS was selected as a case study. The feeder was modeled in ETAP, a software tool for electrical power system modeling, analysis, design, optimization, operation, control, and automation. These results can help power utilities and power producers conduct reliability studies and properly utilize distributed generation sources for the future expansion of power systems.

Author 1: Sanaullah Ahmad
Author 2: Sana Sardar
Author 3: Azzam Ul Asar
Author 4: Babar Noor

Keywords: Electric power system reliability; distributed generation; reliability assessment

Download PDF
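The interruption frequency and duration that Paper 49 analyzes are conventionally summarized by the IEEE 1366 indices SAIFI and SAIDI; the abstract does not name the indices it uses, so this is an assumption, and the load-point data below are made-up illustration values:

```python
# SAIFI/SAIDI sketch. Each load point is a tuple of
# (failure rate per year, customers served, average outage duration in hours).

def saifi(points):
    """System Average Interruption Frequency Index: interruptions/customer/year."""
    total_customers = sum(n for _, n, _ in points)
    return sum(lam * n for lam, n, _ in points) / total_customers

def saidi(points):
    """System Average Interruption Duration Index: outage hours/customer/year."""
    total_customers = sum(n for _, n, _ in points)
    return sum(lam * dur * n for lam, n, dur in points) / total_customers

feeder = [(0.2, 200, 1.5), (0.3, 150, 2.0), (0.5, 100, 3.0)]
```

Adding a DG unit near a load point effectively lowers that point's failure rate or outage duration, which is how the reliability improvements reported in the paper show up in these indices.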

Paper 50: Security Issues in the Internet of Things (IoT): A Comprehensive Study

Abstract: Wireless communication networks are highly prone to security threats. Their major applications are in the military, business, healthcare, retail, and transportation. These systems use wired, cellular, or ad hoc networks. Wireless sensor networks, actuator networks, and vehicular networks have received great attention in society and industry. In recent years, the Internet of Things (IoT) has received considerable research attention and is considered the future of the internet. In the future, the IoT will play a vital role and will change our lifestyles, standards, and business models. The usage of IoT in different applications is expected to rise rapidly in the coming years. The IoT allows billions of devices, people, and services to connect with one another and exchange information. Due to the increased usage of IoT devices, IoT networks are prone to various security attacks. The deployment of efficient security and privacy protocols in IoT networks is needed to ensure confidentiality, authentication, access control, and integrity, among others. In this paper, an extensive, comprehensive study of security and privacy issues in IoT networks is provided.

Author 1: Mirza Abdur Razzaq
Author 2: Sajid Habib Gill
Author 3: Muhammad Ali Qureshi
Author 4: Saleem Ullah

Keywords: Internet of Things (IoT); security issues in IoT; security; privacy

Download PDF

Paper 51: A Two-Stage Classifier Approach using RepTree Algorithm for Network Intrusion Detection

Abstract: In this paper, we present a two-stage classifier based on the RepTree algorithm and protocol subsets for a network intrusion detection system. To evaluate the performance of our approach, we used the UNSW-NB15 and NSL-KDD data sets. In the first phase, our approach divides incoming network traffic by protocol into TCP, UDP, or Other, then classifies it as normal or anomalous. In the second stage, a multiclass algorithm classifies the anomalies detected in the first phase to identify the attack class and choose the appropriate intervention. The number of features is reduced from over 40 to fewer than 20, depending on the protocol, using feature selection techniques. Detection accuracies of 88.95% and 89.85% were achieved on the complete UNSW-NB15 and NSL-KDD data sets, respectively, using an individual classifier; these results compare favourably with recent work on these data sets.

Author 1: Mustapha Belouch
Author 2: Salah El Hadaj
Author 3: Mohamed Idhammad

Keywords: Intrusion detection; RepTree; UNSW-NB15; NSL-KDD

Download PDF
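Paper 51's two-stage pipeline (protocol routing, then per-protocol binary anomaly detection, then multiclass attack labelling) can be sketched as below; the threshold rules and attack names are hypothetical stand-ins for the trained RepTree models:

```python
# Stage 0: route each flow by protocol; stage 1: per-protocol binary
# normal/anomaly decision; stage 2: multiclass attack labelling.

def route(flow):
    return flow["proto"] if flow["proto"] in ("TCP", "UDP") else "OTHER"

BINARY = {  # hypothetical per-protocol anomaly detectors
    "TCP":   lambda f: f["pkts"] > 1000,
    "UDP":   lambda f: f["pkts"] > 500,
    "OTHER": lambda f: f["pkts"] > 100,
}

def multiclass(flow):
    """Hypothetical stage-2 attack labelling, applied only to anomalies."""
    return "DoS" if flow["pkts"] > 5000 else "Probe"

def classify(flow):
    if BINARY[route(flow)](flow):
        return multiclass(flow)
    return "normal"

labels = [classify(f) for f in (
    {"proto": "TCP", "pkts": 12},
    {"proto": "UDP", "pkts": 800},
    {"proto": "TCP", "pkts": 9000},
)]
```

Splitting by protocol first is what lets each model work with a smaller, protocol-specific feature set, as the abstract describes.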

Paper 52: Comprehensive Understanding of Intelligent User Interfaces

Abstract: This paper presents a basic discussion of one of the latest advances in technology, the Intelligent User Interface (IUI), which combines two major fields of computer science, namely HCI and artificial intelligence. The paper first discusses basic definitions, the motivation for this research, and UIMS (User Interface Management Systems), along with examples of user interface models, to explain user interfaces in detail. The four major classes of these interfaces (with their examples) are used as the method for this study. The overall discussion summarizes the basic principles used to create these interfaces, the components that are important in the generation of IUIs, and the decision-making process in IUIs, to help the reader understand how they work.

Author 1: Sarang Shaikh
Author 2: M. Ajmal Sawand
Author 3: Najeed Ahmed Khan
Author 4: Farhan Badar Solangi

Keywords: Intelligent user interfaces; HCI; artificial intelligence; IIUI

Download PDF

Paper 53: A Review of Bluetooth based Scatternet for Mobile Ad hoc Networks

Abstract: Bluetooth-based networking is an emerging and promising technology that takes small-area networking to an enhanced level of communication. The Bluetooth specification supports piconet formation; however, scatternet formation remains an open problem, the primary challenge being the interconnection of piconets. This paper presents a review of the proposed approaches and the problems confronted in establishing scatternets for ad hoc networks, specifically MANETs. In this work, the Blue layer algorithm is compared with an MMPI-interface-based algorithm for Bluetooth scatternet formation. The enhancements in the developed MMPI framework make it a good option for scatternet applications.

Author 1: Khizra Asaf
Author 2: Muhammad Umer Sarwar
Author 3: Muhammad Kashif Hanif
Author 4: Ramzan Talib
Author 5: Irfan Khan

Keywords: Bluetooth; ad hoc network; piconet; scatternet; MANET

Download PDF

Paper 54: Data Provenance for Cloud Computing using Watermark

Abstract: "Data is the new oil" has become a proverb owing to the large amount of data generated from various sources. Processing and storing such a tremendous amount of data is beyond the capabilities of traditional computing systems. Cloud computing is widely considered the next-generation architecture due to its dynamic resource pools, low cost, reliability, virtualization, and high availability. In cloud computing, one important issue is tracking and recording the origin of data objects, known as data provenance. The major challenges for provenance management in a distributed environment are privacy and security. This paper presents data provenance management for cloud computing using a watermarking technique. The experiment applies both visible and hidden watermarks to shared data objects stored in a cloud computing environment. The experimental results demonstrate the efficiency and reliability of the proposed technique.

Author 1: Muhammad Umer Sarwar
Author 2: Muhammad Kashif Hanif
Author 3: Ramzan Talib
Author 4: Bilal Sarwar
Author 5: Waqar Hussain

Keywords: Cloud computing; data provenance; watermark; security; visible watermark; invisible watermark

Download PDF
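The hidden-watermark side of Paper 54's experiment can be illustrated with least-significant-bit embedding of a provenance tag into a data object; this is a generic LSB scheme under our own assumptions, not necessarily the authors' exact method:

```python
# Embed a provenance tag in the LSBs of a byte object, then recover it.

def embed(data, tag):
    """Write each bit of tag into the least-significant bit of one data byte."""
    bits = [(byte >> i) & 1 for byte in tag for i in range(8)]
    if len(bits) > len(data):
        raise ValueError("object too small for tag")
    out = bytearray(data)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # overwrite the LSB only
    return bytes(out)

def extract(data, tag_len):
    """Read tag_len bytes back out of the LSBs."""
    bits = [b & 1 for b in data[:tag_len * 8]]
    return bytes(sum(bits[k * 8 + i] << i for i in range(8))
                 for k in range(tag_len))

obj = bytes(range(64))
marked = embed(obj, b"lab42")
```

Because only the LSB of each byte changes, the marked object stays within one unit of the original per byte, which is what makes the watermark "hidden" while still letting the provenance tag be recovered exactly.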

Paper 55: On FPGA Implementation of a Continuous-Discrete Time Observer for Sensorless Induction Machine using Simulink HDL Coder

Abstract: This paper deals with the design of a continuous-discrete time high gain observer (CDHGO) for sensorless control of an induction machine (IM). Only two weakly sampled stator current measurements are used to achieve real-time estimation of the rotor flux, the mechanical speed, and the load torque. The feasibility of implementing our algorithm on an FPGA target is discussed in terms of the best word format for internal variables and of overcoming the problems associated with VHDL conversion of complex block diagrams. Before eventual implementation on a Virtex FPGA board, the proposed observer is validated in the ModelSim software, where we show that the waveforms of the estimates closely track the true ones.

Author 1: Moez Besbes
Author 2: Salim Hadj Sad
Author 3: Faouzi M’Sahli
Author 4: Monther Farza

Keywords: High gain observer; FPGA; HDL coder

Download PDF

Paper 56: A Feature Selection Algorithm based on Mutual Information using Local Non-uniformity Correction Estimator

Abstract: Feature subset selection is an effective approach for selecting a compact subset of features from the original set; it is used to remove irrelevant and redundant features from datasets. In this paper, a novel algorithm is proposed to select the best subset of features based on mutual information and a local non-uniformity correction estimator. The proposed algorithm consists of three phases: in the first phase, a ranking function measures the dependency and relevance among features. In the second phase, candidates with higher dependency and minimum redundancy are selected for the optimal subset. In the last phase, the produced subset is refined using a forward and backward wrapper filter to ensure its effectiveness. Datasets from the UCI machine learning repository are used for validation and testing. The performance of the proposed algorithm is found to be very significant in terms of classification accuracy and time complexity.

Author 1: Ahmed I. Sharaf
Author 2: Mohamed Abu El-Soud
Author 3: Ibrahim El-Henawy

Keywords: Feature subset selection; irrelevant features; mutual information; local non-uniformity correction

Download PDF
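The mutual-information ranking underlying Paper 56's first phase can be sketched for discrete variables; this is plain MI only — the local non-uniformity correction estimator is the paper's contribution and is not reproduced here, and the toy data are invented:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """I(X;Y) in bits for two discrete sequences of equal length."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

labels   = [0, 0, 1, 1, 0, 1, 0, 1]
relevant = [0, 0, 1, 1, 0, 1, 0, 1]   # copies the label: maximal MI (1 bit)
noisy    = [0, 1, 0, 1, 0, 1, 1, 0]   # independent of the label: MI = 0
ranked = sorted([("relevant", relevant), ("noisy", noisy)],
                key=lambda f: mutual_information(f[1], labels), reverse=True)
```

A ranking function of this kind scores each feature against the class label, so the relevant feature is ranked ahead of the irrelevant one in the first phase.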

Paper 57: Sentiment Analysis Using Deep Learning Techniques: A Review

Abstract: The World Wide Web, through social networks, forums, review sites, and blogs, generates enormous amounts of data in the form of users' views, emotions, opinions, and arguments about different social events, products, brands, and politics. Sentiments expressed by users on the web have great influence on readers, product vendors, and politicians. The unstructured data from social media need to be analyzed and structured, and for this purpose sentiment analysis has received significant attention. Sentiment analysis is a form of text classification used to categorize an expressed mind-set or feeling, e.g., as negative, positive, favorable, unfavorable, thumbs up, thumbs down, etc. A key challenge for sentiment analysis is the lack of sufficient labeled data in Natural Language Processing (NLP). To address this issue, sentiment analysis and deep learning techniques have been merged, because deep learning models are effective due to their automatic learning capability. This review paper highlights recent studies on the application of deep learning models, such as deep neural networks and convolutional neural networks, to different problems of sentiment analysis, including sentiment classification, cross-lingual analysis, textual and visual analysis, and product review analysis.

Author 1: Qurat Tul Ain
Author 2: Mubashir Ali
Author 3: Amna Riaz
Author 4: Amna Noureen
Author 5: Muhammad Kamran
Author 6: Babar Hayat
Author 7: A. Rehman

Keywords: Sentiment analysis; recurrent neural network; deep neural network; convolutional neural network; recursive neural network; deep belief network

Download PDF

Paper 58: Cloud Computing: Pricing Model

Abstract: Cloud computing is a fundamental approach to providing computing resources online. It enables on-demand sharing of resources and costs among a large number of end users, and lets end users process, manage, and store data quickly at reasonable prices. It is important to understand the causes of clients' hesitation regarding cloud computing services, particularly when a new pricing method is introduced. Price is an important element, an indicator that often reflects the quality of services; at the same time, the vendor's service offer directly influences clients' decisions. For both providers and users of cloud services, identifying the common factors in cloud service pricing is critical. This paper surveys various pricing models for cloud computing, how they apply to different resources, and how they compare, including the pricing models of two platforms: 1) Google Cloud Platform; and 2) Amazon Web Services.

Author 1: Aferdita Ibrahimi

Keywords: Cloud computing pricing model; comparison of pricing models; Google Cloud Platform and Amazon Web Services pricing models

Download PDF

Paper 59: Cryptography: A Comparative Analysis for Modern Techniques

Abstract: Cryptography plays a vital role in ensuring secure communication between multiple entities. In many contemporary studies, researchers have contributed towards identifying the best cryptographic mechanisms in terms of performance. Selecting a cryptographic technique for a particular context is a big question; to answer it, many existing studies have claimed that technique selection depends purely on the desired quality attributes, such as efficiency and security. It has been observed that existing reviews focus only on either symmetric or asymmetric encryption. Another limitation is that their performance comparison criteria cover only common parameters. In this paper, we evaluate the performance of different symmetric and asymmetric algorithms, covering multiple parameters such as encryption/decryption time, key generation time, and file size. For evaluation purposes, we performed simulations in a sample context in which multiple cryptographic algorithms were compared. The simulation results are visualized in a way that clearly depicts which algorithm is most suitable for achieving a particular quality attribute.

Author 1: Faiqa Maqsood
Author 2: Muhammad Ahmed
Author 3: Muhammad Mumtaz Ali
Author 4: Munam Ali Shah

Keywords: Cryptography; symmetric; asymmetric; encryption; decryption

Download PDF
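A minimal timing harness of the kind used for the encryption/decryption-time comparisons in Paper 59 is sketched below; the two "ciphers" are toy stand-ins (an XOR stream and a SHA-256 digest), not the algorithms the paper actually benchmarks:

```python
import hashlib
import time

def xor_cipher(data, key):
    """Toy XOR stream 'cipher' (illustration only, not secure)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def hash_digest(data, key):
    """Keyed SHA-256 digest as a second timed operation."""
    return hashlib.sha256(key + data).digest()

def time_op(fn, data, key, reps=50):
    """Average wall-clock time per call over reps repetitions."""
    start = time.perf_counter()
    for _ in range(reps):
        fn(data, key)
    return (time.perf_counter() - start) / reps

payload, key = b"x" * 4096, b"\x01\x02\x03\x04"
timings = {fn.__name__: time_op(fn, payload, key)
           for fn in (xor_cipher, hash_digest)}
```

Varying `payload` size in such a harness is how the file-size parameter in the paper's comparison would be explored.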

Paper 60: Facial Expression Recognition using Hybrid Texture Features based Ensemble Classifier

Abstract: Communication is fundamental to humans. Many scientific studies in the literature have shown that 54 to 94 percent of human communication is non-verbal. Facial expressions are the most important part of non-verbal communication and the most promising way for people to communicate their feelings, emotions, and intentions. Pervasive computing and ambient intelligence are required to develop human-centered systems that actively react to complex human communication occurring naturally. A Facial Expression Recognition (FER) system is therefore required for this type of problem. In this paper, an FER system using hybrid texture features is proposed to predict human expressions. Existing FER systems show discrepancies across different cultures and ethnicities. The proposed system addresses this problem by using hybrid texture features that are invariant to scale as well as rotation. As texture features, Gabor LBP (GLBP) features are used to classify expressions with a Random Forest classifier. Experiments performed on different facial databases demonstrate promising results.

Author 1: M. Arfan Jaffar

Keywords: Expression classification; ensemble; adaboost; facial; features

Download PDF
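The LBP half of Paper 60's GLBP descriptor is standard and easy to illustrate: each pixel is coded by thresholding its eight neighbours against the centre value, which makes the code robust to monotonic illumination changes (the Gabor filtering stage is omitted here):

```python
def lbp_code(img, x, y):
    """Basic 8-neighbour LBP code for an interior pixel (x, y)."""
    c = img[x][y]
    nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]   # clockwise from top-left
    # each neighbour >= centre contributes one bit of the 8-bit code
    return sum((img[x + dx][y + dy] >= c) << i for i, (dx, dy) in enumerate(nbrs))

img = [[10, 10, 10],
       [10, 50, 10],
       [10, 10, 90]]
code = lbp_code(img, 1, 1)
```

A histogram of such codes over image patches forms the texture feature vector that would then feed the Random Forest classifier.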

Paper 61: Web Service for Incremental and Automatic Data Warehouses Fragmentation

Abstract: Data warehouses (DW) collect and store heterogeneous and bulky data. They represent a collection of thematic, integrated, non-volatile, and historical data. They are fed from different data sources through transactional queries and serve analytical needs through decisional queries. Generally, the execution cost of decisional queries on large tables is very high, and reducing this cost is essential to enable decision-makers to interact in reasonable time. In this context, DW administrators use different optimization techniques such as fragmentation, indexing, materialized views, and parallelism. On the other hand, the volume of data residing in the DW is constantly evolving. This can increase the complexity of frequent queries and degrade DW performance. The administrator must repeatedly design, by hand, a new fragmentation scheme from the new load of frequent queries, so an automatic DW fragmentation tool becomes important. The approach proposed in this paper provides an incremental horizontal fragmentation technique for the DW through a web service. The technique is based on updating the query load by adding newly frequent queries and eliminating queries that are no longer frequent. The goal is to automate incremental fragmentation in order to optimize the new query load. An experimental study on a real DW is carried out, and comparative tests show the effectiveness of our approach.

Author 1: Ettaoufik Abdelaziz
Author 2: Mohammed Ouzzif

Keywords: Data warehouse; horizontal fragmentation; incremental fragmentation; frequent queries; web service

Download PDF
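The query-load update step Paper 61 describes (admit newly frequent queries, evict queries that are no longer frequent) can be sketched as follows; the frequency threshold and query names are invented for illustration:

```python
# Incremental update of the frequent-query load that drives re-fragmentation.
from collections import Counter

def update_load(current_load, observed_queries, min_freq=3):
    """Keep load queries still frequent in the window; add newly frequent ones."""
    freq = Counter(observed_queries)
    kept = {q for q in current_load if freq[q] >= min_freq}
    added = {q for q, c in freq.items() if c >= min_freq}
    return kept | added

load = {"Q1", "Q2"}
window = ["Q1", "Q1", "Q1", "Q3", "Q3", "Q3", "Q2"]
new_load = update_load(load, window)   # Q2 falls below the threshold; Q3 enters
```

In the paper's approach, a change in this load is what triggers derivation of a new horizontal fragmentation scheme, rather than the administrator redesigning it manually.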

Paper 62: Cost Optimization of Replicas in Tree Network of Data Grid with QoS and Bandwidth Constraints

Abstract: Data grids provide resources for data-intensive scientific applications that need to access huge amounts of data around the world. Since a data grid is built on a wide-area network, its latency prohibits efficient access to data. This latency can be decreased by replicating data in the vicinity of the users who request it. Data replication can also improve data availability and decrease network bandwidth usage. It is influenced by two important constraints: Quality of Service (QoS), which is owned locally by a user, and a bandwidth constraint, which globally affects links that may be shared by multiple users. Guaranteeing both constraints while minimizing the replication cost, consisting of communication and storage costs, is a challenging task. To address this problem, the authors propose a dynamic algorithm called Optimal Placement of Replicas that minimizes replication cost while meeting both constraints. Heuristic algorithms are also designed that are competitive with the optimal algorithm in performance metrics such as replication cost, network bandwidth usage, and data availability. Extensive simulations show that the optimal algorithm saves 10% of the cost compared to the heuristic algorithms and provides local responsiveness for half of the user requests.

Author 1: Alireza Chamkoori
Author 2: Farnoosh Heidari
Author 3: Naser Parhizgar

Keywords: Hierarchical data grid; replication cost; replica optimal placement; communication cost; storage cost; cost minimization; QoS and bandwidth constraints

Download PDF


© The Science and Information (SAI) Organization Limited. Registered in England and Wales. Company Number 8933205. All rights reserved. thesai.org