The Science and Information (SAI) Organization
IJACSA Volume 9 Issue 2

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.

View Full Issue

Paper 1: Dynamic Time Warping and FFT: A Data Preprocessing Method for Electrical Load Forecasting

Abstract: For power suppliers, an important task is to accurately predict the short-term load. Thus, many papers have introduced various artificial-intelligence models to improve prediction accuracy. In recent years, Random Forest Regression (RFR) and Support Vector Machine (SVM) have been widely used for this purpose. However, they cannot perform well when the sample data set is too noisy or has too few pattern features. It is usually difficult to tell, before trials, whether a regression algorithm can accurately predict the future load from the historical data set. Here we demonstrate a method that estimates the similarity between time series by Dynamic Time Warping (DTW) combined with the Fast Fourier Transform (FFT). Results show this is a simple and fast way to filter the raw, large electrical-load data set and improve the learning result before looping through all learning processes.

Author 1: Juan Huo

Keywords: Load forecast; Dynamic Time Warping (DTW); Fast Fourier Transform (FFT); random forest; Support Vector Machine (SVM)

Download PDF
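The similarity measure named in the abstract can be sketched in a few lines. This is only an illustrative implementation of textbook DTW and a naive DFT, not the authors' code; how the two are combined for filtering is specified in the paper itself.

```python
# Classic dynamic-programming DTW distance between two load profiles, plus a
# naive DFT magnitude spectrum (an FFT library call would replace the latter).

def dtw_distance(a, b):
    """O(len(a)*len(b)) DTW distance with absolute-difference local cost."""
    n, m = len(a), len(b)
    inf = float("inf")
    # cost[i][j] = DTW distance between prefixes a[:i] and b[:j]
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # match step
    return cost[n][m]

def magnitude_spectrum(x):
    """Naive O(n^2) DFT magnitudes of a real series."""
    import cmath
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) for k in range(n)]
```

Note how warping absorbs local time shifts: a repeated sample in one series costs nothing, which is exactly why DTW suits load curves whose daily peaks drift in time.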

Paper 2: A Serious Game for Improving Inferencing in the Presence of Foreign Language Unknown Words

Abstract: This study presents the design of a serious game for improving inferencing for foreign language students. The design of the game is grounded in research on reading theory, motivation and game design. The game contains trial-and-error activities in which students create conversations and then watch these conversations play out. Making mistakes results in students receiving feedback and being asked to try again. An evaluation of the system was also conducted, in which participants used both simple text and the game. Post-test scores when using the game were significantly higher than scores when reading the text. User reception of the system was also positive. These results suggest that serious games can be effective for enhancing inferencing when foreign language students face unknown words. Implications for reading comprehension and for incidental vocabulary learning are also discussed.

Author 1: Pedro Gabriel Fonteles Furtado
Author 2: Tsukasa Hirashima
Author 3: Hayashi Yusuke

Keywords: Serious game; foreign language; contextual inference; unknown words

Download PDF

Paper 3: Real Time Computation for Robotic Arm Motion upon a Linear or Circular Trajectory

Abstract: The computation method proposed in this paper, named ADNIA (Analysis Differential Numeric Interpolate Algorithms), computes the Cartesian coordinates of waypoints for the TCP (tool centre point) of a robotic arm moving along an imposed linear or circular trajectory. Considering a real-time software implementation of ADNIA, the location matrix of the robotic arm is computed at every sampling period. The method works with a well-defined value of the motion speed, which results in maximum computation precision for those motions.

Author 1: Liliana Marilena Matica
Author 2: Cornelia Gyorödi
Author 3: Helga Maria Silaghi
Author 4: Simona Veronica Abrudan Cacioara

Keywords: Waypoints; location matrix; position vector; orientation of a robotic arm; orientation versors; Analysis Difference Numeric Interpolate Algorithm (ADNIA); linear or circular ADNIA

Download PDF

Paper 4: A Game Theoretic Approach to Demand Side Management in Smart Grid with Multiple Energy Sources and Storage

Abstract: A smart grid is an advancement of the electrical grid which includes a variety of operational and energy measures. To utilize energy distribution efficiently, demand side management has become the fore-runner. In this paper, we use game theory as a tool to model our system as a Stackelberg game. We make use of different energy sources, namely solar energy, energy from a battery and energy from the provider, to run the appliances of a subscriber. We consider the scenario where the subscriber can feed excess generated energy back to the grid, thereby reducing the load on the grid during peak hours. We design a pricing scheme to calculate the utilities of the subscriber and the provider and show how our model maximizes the utility of the entire system, thereby showing the existence of a Nash equilibrium.

Author 1: Aritra Kumar Lahiri
Author 2: Ashwin Vasani
Author 3: Sumanth Kulkarni
Author 4: Nishant Rawat

Keywords: Demand side management; utility; smart grid; solar; battery; energy provider; fairness; proportional division; utilitarian division; Nash equilibrium

Download PDF

Paper 5: Design of Mobile Application for Travelers to Transport Baggage and Handle Check-in Process

Abstract: In this paper, an Android-based application called ‘Baggage Check-in Handling System’ is developed to help travelers transport their baggage to the airport and handle the check-in process. It merges the idea of online baggage check-in with tracking technology. The application is inspired by the rapid growth of on-demand ride services, such as UberX and Lyft, and the widespread adoption of smartphones. The proposed system enables travelers to make an appointment before the flight’s take-off by requesting a driver to pick up the traveler’s baggage and transport it to the airport. Travelers can then track the driver’s location using the Global Positioning System (GPS). After the check-in process, the driver sends travelers a unique barcode for the baggage through the application. As a result, the traveler has the choice of proceeding directly to the flight gate. The application is created for the Android platform and developed in the Java programming language using the Android software development kit (SDK). Additionally, data between the database and the server are exchanged using phpMyAdmin. The application uses an authentication technique called the Secure Hash Algorithm (SHA), which is designed to improve the scalability of authentication and reduce the overhead of access control.

Author 1: Sara Y. Ahmed

Keywords: Baggage handling system; tracking technology; baggage barcode; android platform; Android software development kit (SDK); phpMyAdmin, Secure Hash Algorithm (SHA)

Download PDF

Paper 6: Prioritizing Road Maintenance Activities using GIS Platform and Vb.net

Abstract: One of the most important factors for the sustainable development of any country is the quality and efficiency of its transportation system. The principled and accurate maintenance of roads, in addition to having a major impact on budget savings, improves the quality and service levels of the transportation system. For this reason, road management and maintenance are the main pillars of the transportation system in any country. Nowadays, due to the increased cost of maintaining roads and the lack of funding in this area, traditional ways of managing and maintaining roads, which are largely based on the experience of the experts themselves, are no longer affordable. Hence, more recent and more systematic methods have become popular among the relevant authorities. Afghanistan is a country facing problems such as budget deficits and a lack of professional experts and advanced technology in the road maintenance sector. This paper presents an example of using the GIS platform and VB.NET to prioritize road maintenance and rehabilitation activities based on identified criteria. A case study was conducted in an academic environment, and road maintenance and rehabilitation activities were prioritized. The results show that the positive criterion has the greatest impact on the ranking of road maintenance activities. The characteristic of this process is to help decision makers plan road maintenance requirements so that funds for future planning are allocated effectively and efficiently.

Author 1: Fardeen Nodrat
Author 2: Dongshik Kang

Keywords: Road maintenance; prioritization; GIS; Vb.net; Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS)

Download PDF

Paper 7: A Comparison of Usability Aspects between an Existing Hospital Website of Pakistan with a Template based on Usability Standards

Abstract: More and more people search the internet for medical and health information. Due to the increase in demand for online health services, hospitals need to equip their websites with usability standards. Hospital websites should be user-centered in order to increase usability. In the present study, an existing public-sector hospital website is compared with a template designed for healthcare websites. The template was designed keeping in view user demands for hospital websites. A usability evaluation of both websites was performed. Twenty-one users were involved in the study. Three representative tasks were performed by each user on each website, and a questionnaire was presented afterwards to collect user opinions about the websites under evaluation. An average score was calculated for each website on each usability component. 75% of users responded positively to the designed website template, compared with the existing hospital website, which received only 33% positive responses. Hence, it was evident that the designed template had a better response for usability. The findings of this study support the literature finding that user-centered design can significantly improve the usability of websites. This study is a step towards research which intends to understand usability problems and propose design rules for designing hospital websites in Pakistan in line with usability standards.

Author 1: Muhammad Usman
Author 2: Mahmood Ashraf
Author 3: Muhammad Tahir

Keywords: Usability evaluation; healthcare website; hospital website evaluation

Download PDF

Paper 8: Detection of Climate Crashes using Fuzzy Neural Networks

Abstract: This paper considers the detection of climate crashes, i.e. failures associated with the use of climate models, based on parameters induced from the climate simulation. Detection and analysis of the crashes allows one to understand and improve the climate models. A fuzzy neural network (FNN) based on Takagi-Sugeno-Kang (TSK) type fuzzy rules is presented to determine the chances of failure of the climate models. For this purpose, the parameters characterising the climate crashes in the simulation are used. For comparative analysis, a Support Vector Machine (SVM) is applied to the same problem. As a result of the comparison, accuracy rates of 94.4% and 97.96% were obtained for the SVM and FNN models, respectively. The FNN model was found to have better performance in modelling climate crashes.

Author 1: Rahib H. Abiyev
Author 2: Mohammed Azad Omar
Author 3: Boran Sekeroglu

Keywords: Climate crashes; fuzzy neural networks; parallel ocean program; SVM

Download PDF

Paper 9: Norm’s Trust Model to Evaluate Norms Benefit Awareness for Norm Adoption in an Open Agent Community

Abstract: In recent developments, norms have become important entities that are considered in agent-based system design. Norms not only organize and coordinate the actions and behaviour of agents but have a direct impact on the achievement of agents’ goals. Consequently, an agent in a multi-agent system requires a mechanism that detects specific norms for adoption while rejecting others. The impact of such norm selection imposes risks on the agent’s goal and its plan, ensuing from the probability of positive or negative outcomes when the agent adopts or rejects some norms. In earlier work, this predicament was resolved by enabling an agent to evaluate a norm’s benefits when deciding whether to adopt a particular norm. The evaluation mechanism entails a framework that analyzes a norm’s adoption ratio, yield, morality and trust, the unified values of which indicate the norm’s benefits. In this paper, the trust parameter of the mechanism is analyzed, and a norm’s trust model is proposed and utilized in the evaluation of a norm’s benefits for subsequent adoption or rejection. Ultimately, the norm’s benefits are determined as a consequence of a favorable or unfavorable trust value, a significant parameter in a norm’s adoption or rejection.

Author 1: Al-Mutazbellah Khamees Itaiwi
Author 2: Mohd Sharifuddin Ahmad
Author 3: Alicia Y. C. Tang

Keywords: Norm’s benefits; norm’s trust; norm detection; normative multi-agent systems; intelligent software agent

Download PDF

Paper 10: Day-ahead Base, Intermediate, and Peak Load Forecasting using K-Means and Artificial Neural Networks

Abstract: Industries depend heavily on the capacity and availability of electric power. A typical load curve has three parts, namely base, intermediate, and peak load. Predicting these three system loads accurately in a power system helps power utilities ensure the availability of supply and avoid the risk of over- or under-utilization of generation, transmission, and distribution facilities. The goal of this research is to create a suitable model for day-ahead base, intermediate, and peak load forecasting of the electric load data provided by a power utility company. This paper presents an approach to predicting the three system loads using K-means clustering and artificial neural networks (ANN). The power utility’s load data was clustered using K-means to determine the daily base, intermediate, and peak loads, which were then fed into an ANN model that utilized the Quick Propagation training algorithm and a Gaussian activation function. It was found that the implemented ANN model generated 2.2%, 1.84%, and 1.4% as the lowest MAPE for base, intermediate, and peak loads, respectively, with the highest MAPE below the accepted standard error rate of 5%. The results of this study clearly suggest that, with the proper method of data preparation, clustering, and model implementation, ANN can be a viable solution for forecasting the day-ahead base, intermediate, and peak load demand of a power utility.

Author 1: Lemuel Clark P. Velasco
Author 2: Noel R. Estoperez
Author 3: Renbert Jay R. Jayson
Author 4: Caezar Johnlery T. Sabijon
Author 5: Verlyn C. Sayles

Keywords: K-means clustering; artificial neural networks; base intermediate and peak load; day-ahead load forecasting

Download PDF
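The clustering stage described above can be sketched with a one-dimensional K-means (k = 3) that splits a day's load readings into base, intermediate, and peak groups. This is a hedged illustration with a deterministic initialization of our own choosing; the paper's ANN forecasting stage and the utility's actual data are not reproduced.

```python
# 1-D K-means, k clusters, deterministic init spread across the sorted range.

def kmeans_1d(values, k=3, iters=100):
    values = sorted(values)
    # place initial centroids at evenly spaced positions in the sorted data
    centroids = [values[(len(values) - 1) * i // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:  # assign each reading to its nearest centroid
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        new_centroids = [sum(c) / len(c) if c else centroids[i]
                         for i, c in enumerate(clusters)]
        if new_centroids == centroids:  # converged
            break
        centroids = new_centroids
    return centroids, clusters

# clusters ordered by centroid correspond to base / intermediate / peak load
```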

Paper 11: Proposed an Adaptive Bitrate Algorithm based on Measuring Bandwidth and Video Buffer Occupancy for Providing Smoothly Video Streaming

Abstract: Dynamic adaptive streaming over HTTP (DASH) has been widely disseminated over the Internet, especially under time-varying network conditions, where providing smooth video streaming at high quality is currently the main challenge. In a DASH system, after completing the download of one segment, the player estimates the available network bandwidth by calculating the download throughput and then adapts the video bitrate level based on its estimate. However, the bandwidth estimated in the application layer is not accurate due to off-intervals appearing during the downloading process. To avoid unfair bandwidth estimation by the clients, this work proposes a logarithmic approach to the received network bandwidth, which increases or decreases this bandwidth logarithmically to converge to the fair-share (estimated) bandwidth. After obtaining the measured bandwidth, an adaptive bitrate algorithm is proposed that considers this measured bandwidth in addition to video buffer occupancy. The video buffer model is associated with three thresholds (one for initial startup and two operating thresholds). When the video buffer’s level stays between the two operating thresholds, the video bitrate is kept unchanged. Otherwise, when the buffer occupancy is too high or too low, an appropriate video bitrate is chosen to avoid buffer overflow/underflow. Simulation results show that the proposed scheme is able to converge the measured bandwidth to the fair-share bandwidth very quickly. The proposed scheme is also compared with a conventional scheme and is found to achieve the best performance in terms of efficiency, stability and fairness.

Author 1: Saba Qasim Jabbar
Author 2: Dheyaa Jasim Kadhim
Author 3: Yu Li

Keywords: DASH; video streaming; video buffer; video adaptive bitrate algorithm, QoE

Download PDF
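The two-operating-threshold rule in the abstract can be sketched as a small decision function. The thresholds, the bitrate ladder, and the step-up/step-down rules below are simplifications of ours; the logarithmic bandwidth convergence and the exact update rules are defined in the paper itself.

```python
# Buffer-threshold bitrate adaptation: hold the bitrate inside the operating
# band, step up (within measured bandwidth) when the buffer is high, step
# down when it is low.

def choose_bitrate(ladder, current, buffer_s, low_s, high_s, measured_bw):
    """ladder: ascending list of available bitrates (bps); current: the
    currently selected bitrate; buffer_s: buffer level in seconds."""
    idx = ladder.index(current)
    if low_s <= buffer_s <= high_s:
        return current                      # inside operating band: unchanged
    if buffer_s > high_s:                   # buffer high: try to step up,
        for b in reversed(ladder):          # but stay within measured bandwidth
            if b <= measured_bw and b > current:
                return b
        return current
    # buffer below the low threshold: step down to refill faster
    return ladder[max(idx - 1, 0)]
```

A session loop would call this once per downloaded segment, feeding in the latest buffer level and bandwidth measurement.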

Paper 12: Software Bug Prediction using Machine Learning Approach

Abstract: Software Bug Prediction (SBP) is an important issue in software development and maintenance processes, as it concerns the overall success of the software. Predicting software faults in an earlier phase improves software quality, reliability and efficiency and reduces software cost. However, developing a robust bug prediction model is a challenging task, and many techniques have been proposed in the literature. This paper presents a software bug prediction model based on machine learning (ML) algorithms. Three supervised ML algorithms have been used to predict future software faults based on historical data: Naïve Bayes (NB), Decision Tree (DT) and Artificial Neural Networks (ANNs). The evaluation process showed that the ML algorithms can be used effectively with a high accuracy rate. Furthermore, a comparison measure is applied to compare the proposed prediction model with other approaches. The collected results showed that the ML approach has better performance.

Author 1: Awni Hammouri
Author 2: Mustafa Hammad
Author 3: Mohammad Alnabhan
Author 4: Fatima Alsarayrah

Keywords: Software bug prediction; faults prediction; prediction model; machine learning; Naïve Bayes (NB); Decision Tree (DT); Artificial Neural Networks (ANNs)

Download PDF
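One of the three classifiers named above, Naïve Bayes, is simple enough to sketch from scratch. The Gaussian variant and the two-feature "software metric" vectors below are illustrative assumptions, not the paper's dataset or configuration.

```python
# Gaussian Naive Bayes: per-class priors plus per-feature mean/variance,
# prediction by maximum log-posterior.
import math

def fit_gnb(X, y):
    """Return {class: (prior, feature means, feature variances)}."""
    model = {}
    for c in set(y):
        rows = [x for x, label in zip(X, y) if label == c]
        means = [sum(col) / len(rows) for col in zip(*rows)]
        variances = [max(sum((v - m) ** 2 for v in col) / len(rows), 1e-9)
                     for col, m in zip(zip(*rows), means)]
        model[c] = (len(rows) / len(y), means, variances)
    return model

def predict_gnb(model, x):
    def log_posterior(c):
        prior, means, variances = model[c]
        lp = math.log(prior)
        for v, m, var in zip(x, means, variances):
            # Gaussian log-likelihood of feature v under class c
            lp += -0.5 * math.log(2 * math.pi * var) - (v - m) ** 2 / (2 * var)
        return lp
    return max(model, key=log_posterior)
```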

Paper 13: Long-Term Weather Elements Prediction in Jordan using Adaptive Neuro-Fuzzy Inference System (ANFIS) with GIS Techniques

Abstract: Weather elements are the most important parameters in meteorological and hydrological studies, especially in semi-arid regions like Jordan. The Adaptive Neuro-Fuzzy Inference System (ANFIS) is used here to predict the minimum and maximum temperature and the rainfall for the next 10 years using 30 years of time-series data for the period from 1985 to 2015. Several models were used based on different membership functions, different optimization methods, and different dataset ratios for training and testing. By combining a neural network with a fuzzy system, the hybrid intelligent system results in a hybrid neuro-fuzzy system, an approach that is good enough to simulate and predict rainfall events from long-term meteorological data. In this study, the correlation coefficient and the mean square error were used to test the performance of the model. ANFIS has successfully been used here to predict the minimum and maximum temperature and the rainfall for the next 10 years, and the results show a consistent pattern compared to previous studies. The results showed a decrease in the annual average rainfall amounts in the next 10 years. The minimum average annual temperature showed the disappearance of a certain zone predicted by ANFIS when compared to actual data for the period 1985-2015, and the same behavior was noticed for the average annual maximum temperature.

Author 1: Omar Suleiman Arabeyyat

Keywords: Rainfall prediction; hybrid intelligent system; Adaptive Neuro-Fuzzy Inference System (ANFIS), GIS; time series prediction; long-term weather forecasting; climate change

Download PDF

Paper 14: Detection Capability and CFAR Loss Under Fluctuating Targets of Different Swerling Model for Various Gamma Parameters in RADAR

Abstract: RADAR target detection has dealt with manifold problems over the past few decades. Detection capability is one of the most significant factors in a RADAR system. The main aim of detection is to increase the probability of detection while decreasing the rate of false alarm. The detection threshold is modified as a function of the receiver noise level to keep a fixed false alarm rate. Constant False Alarm Rate (CFAR) processors are used to keep the amount of false alarm under supervision in a diverse background of interference. A loss in Signal-to-Noise Ratio (SNR) can occur due to the CFAR processor. The gamma function is used to determine the probability of false alarm. Adaptive CFAR assumes that the interference distribution is known, and it approximates the unknown parameters associated with various interference distributions. CFAR loss depends on the gamma function; the incomplete gamma function plays an important role in maintaining the threshold voltage as well as the probability of detection. Changing the value of the gamma parameter can improve the probability of detection for the various Swerling models proposed here. This paper also proposes a technique to compare various CFAR losses in terms of different gamma parameters in the presence of different numbers of pulses for the four Swerling models.

Author 1: Md. Maynul Islam
Author 2: Mohammed Hossam-E-Haider

Keywords: Swerling model; Constant False Alarm Rate (CFAR) loss; false alarm; gamma function; probability of detection

Download PDF
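For context, a standard cell-averaging CFAR detector can be sketched as follows. This is the textbook CA-CFAR with its usual scaling factor for exponentially distributed noise power; the paper's actual contribution, how the incomplete gamma function shapes threshold and detection probability across Swerling models, is not reproduced here.

```python
# Cell-averaging CFAR: flag cells whose power exceeds an adaptive threshold
# estimated from training cells on both sides of the cell under test.

def ca_cfar(samples, num_train, num_guard, pfa):
    """samples: power samples; num_train: training cells per side;
    num_guard: guard cells per side; pfa: desired false-alarm probability."""
    n = num_train
    # textbook CA-CFAR threshold factor for exponential noise power
    alpha = 2 * n * (pfa ** (-1.0 / (2 * n)) - 1)
    hits = []
    for i in range(n + num_guard, len(samples) - n - num_guard):
        lead = samples[i - num_guard - n : i - num_guard]
        lag = samples[i + num_guard + 1 : i + num_guard + 1 + n]
        noise = sum(lead + lag) / (2 * n)   # local noise-power estimate
        if samples[i] > alpha * noise:
            hits.append(i)
    return hits
```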

Paper 15: Intelligent Transportation System (ITS) for Smart-Cities using Mamdani Fuzzy Inference System

Abstract: According to UN forecasts (2014), more than half of the world’s population lives in cities, so cities are vital. Cities, as we all know, face complex challenges: for smart cities, the outdated traditional planning of transportation, environmental contamination, finance management and security observation is not adequate. The developing framework for a smart city requires sound infrastructure and adoption of the latest technology. Modern cities face pressures associated with urbanization and globalization to improve the quality of life of their citizens. A framework model that enables the integration of cloud data, social network (SN) services and smart sensors in the context of smart cities is proposed. A service-oriented framework enables the retrieval and analysis of big data sets stemming from Social Networking (SN) sites and integrated smart sensors collecting data streams for smart cities. Smart cities are a broad concept; this article focuses on the transportation sector. Fuzzification is shown to be a capable mathematical approach for modelling traffic and transportation processes. A detailed analysis of fuzzy logic systems is developed to solve various traffic and transportation problems. This paper presents an analysis of the results achieved using the Mamdani Fuzzy Inference System to model complex traffic processes. The results are verified using MATLAB simulation.

Author 1: Kashif Iqbal
Author 2: Muhammad Adnan Khan
Author 3: Sagheer Abbas
Author 4: Zahid Hasan
Author 5: Areej Fatima

Keywords: Information Communication Technology (ICT); Internet of Things (IoT); Intelligent Transportation System (ITS); Traffic Congestion Conditions (TCC); SNA; MF; Mamdani Fuzzy Inference System (MFIS)

Download PDF
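The fuzzify / evaluate-rules / defuzzify cycle of a Mamdani controller can be shown with a toy traffic example. The membership functions, rule base, and output scale below are invented for illustration, not the paper's (which is built and verified in MATLAB).

```python
# Toy Mamdani inference: triangular memberships, min-implication, max
# aggregation, and centroid defuzzification over a discretized output universe.

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def green_time(density):
    """Map traffic density (0-100) to a green-light duration (0-60 s)."""
    low = tri(density, -1, 0, 50)       # fuzzify the crisp input
    med = tri(density, 20, 50, 80)
    high = tri(density, 50, 100, 101)
    universe = [t / 10.0 for t in range(0, 601)]   # output samples, 0..60 s
    num = den = 0.0
    for t in universe:
        # rules: low density -> short green, medium -> medium, high -> long
        mu = max(min(low, tri(t, -1, 0, 30)),
                 min(med, tri(t, 10, 30, 50)),
                 min(high, tri(t, 30, 60, 61)))
        num += t * mu
        den += mu
    return num / den if den else 0.0   # centroid of the aggregated output set
```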

Paper 16: Customer Satisfaction Measurement using Sentiment Analysis

Abstract: Besides the traditional methods of targeting customers, social media presents its own set of opportunities. When companies look for a simple way to gather a large number of responses, social media platforms like Twitter allow them to do just that. For example, by creating a hashtag and prompting followers to tweet their answers to some question, they can quickly collect a large number of answers while simultaneously engaging their customers. Additionally, consumers share their opinions about services and products publicly and with their social circles. This valuable data can be used to support business decisions. However, it is a huge amount of unstructured data from which it is difficult to extract meaningful information. Social media analytics is the field that derives insights from social media data and analyzes its sentiment rather than just reading and counting text. In this article, we used Twitter data to gain insight from public opinion hidden in the data. The support vector machine algorithm is used to classify the sentiment of tweets as positive or negative, and unigrams are applied as the feature-extraction method. The experiments were conducted using a large training dataset, and the algorithm achieved a high accuracy of around 87%.

Author 1: Shaha Al-Otaibi
Author 2: Allulo Alnassar
Author 3: Asma Alshahrani
Author 4: Amany Al-Mubarak
Author 5: Sara Albugami
Author 6: Nada Almutiri
Author 7: Aisha Albugami

Keywords: Social media analytics; sentiment; classification; support vector machine; unigram

Download PDF
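The unigram feature-extraction step named above amounts to a bag-of-words vectorizer, sketched below under the usual whitespace-tokenization assumption; the SVM that consumes these vectors is not reimplemented here.

```python
# Build a vocabulary from training tweets, then map any tweet to a term-count
# vector over that vocabulary (unknown words are ignored).

def build_vocabulary(tweets):
    vocab = sorted({w for t in tweets for w in t.lower().split()})
    return {w: i for i, w in enumerate(vocab)}

def unigram_vector(tweet, vocab):
    vec = [0] * len(vocab)
    for w in tweet.lower().split():
        if w in vocab:
            vec[vocab[w]] += 1
    return vec
```

In practice the resulting vectors are sparse and high-dimensional, which is exactly the regime where linear SVMs perform well.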

Paper 17: Toward Exascale Computing Systems: An Energy Efficient Massive Parallel Computational Model

Abstract: The emerging Exascale supercomputing system, expected by 2020, will unravel many scientific mysteries. This extreme computing system will achieve a thousand-fold increase in computing power compared to current petascale computing systems. The forthcoming system will assist system designers and development communities in moving from traditional homogeneous systems to heterogeneous systems that incorporate powerful accelerator GPU devices alongside traditional CPUs. To achieve ExaFLOPS (10^18 calculations per second) performance through an ultrascale and energy-efficient system, current technologies face several challenges. Massive parallelism is one of these challenges, and it requires a novel energy-efficient parallel programming (PP) model to provide massively parallel performance. In the current study, a new parallel programming model is proposed which is capable of achieving massively parallel performance through coarse-grained and fine-grained parallelism over inter-node and intra-node processing. The suggested model is a tri-level hybrid of MPI, OpenMP and CUDA that runs on a heterogeneous system with the collaboration of traditional CPUs and energy-efficient GPU devices. Furthermore, the developed model has been demonstrated by implementing dense matrix multiplication (DMM). The proposed model is considered an initial and leading model for obtaining massively parallel performance in an Exascale computing system.

Author 1: Muhammad Usman Ashraf
Author 2: Fathy Alburaei Eassa
Author 3: Aiiad Ahmad Albeshri
Author 4: Abdullah Algarni

Keywords: Exascale computing; high-performance computing (HPC); massive parallelism; super computing; energy efficiency; hybrid programming; CUDA; OpenMP; MPI

Download PDF
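The tri-level decomposition can be illustrated serially with a blocked DMM: the outer block loops correspond to the work an MPI rank would own, the row loop within a block to an OpenMP thread, and the per-element accumulations to GPU threads. This is a plain-Python sketch of the structure only, not an MPI+OpenMP+CUDA implementation and not the paper's code.

```python
# Blocked dense matrix multiplication, annotated with the parallelism level
# each loop would map to in the tri-level hybrid model.

def blocked_dmm(A, B, block=2):
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for bi in range(0, n, block):                 # coarse grain: MPI rank
        for bj in range(0, n, block):             #   owns block (bi, bj)
            for i in range(bi, min(bi + block, n)):       # intra-node: OpenMP thread
                for j in range(bj, min(bj + block, n)):
                    # finest grain: one dot product per (i, j) -> GPU threads
                    C[i][j] = sum(A[i][k] * B[k][j] for k in range(n))
    return C
```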

Paper 18: Image Contrast Enhancement by Scaling Reconstructed Approximation Coefficients using SVD Combined Masking Technique

Abstract: The proposed method addresses the general issues of image contrast enhancement. The input image is enhanced by incorporating the discrete wavelet transform, singular value decomposition, standard-intensity-deviation-based clipped sub-image histogram equalization, and a masking technique. In this method, the low-pass-filtered wavelet coefficients and their scaled version undergo the masking approach. The scale value is obtained using singular value decomposition between the reconstructed approximation coefficients and the standard-intensity-deviation-based clipped sub-image histogram-equalized image. The masking image is added to the original image to produce a maximum-contrast-enhanced image. The superiority of the proposed method is tested against other methods, and qualitative and quantitative analyses are used to justify its performance.

Author 1: Sandeepa K S
Author 2: Basavaraj N Jagadale
Author 3: J S Bhat
Author 4: Mukund N Naragund
Author 5: Panchaxri

Keywords: Standard intensity deviation clipped sub image histogram equalization; discrete wavelet transform; singular value decomposition; masking technique

Download PDF

Paper 19: Arijo: Location-Specific Data Crowdsourcing Web Application as a Curriculum Supplement

Abstract: Smart devices are quickly becoming more accessible to the general public. With the proper tools, they can be used to supplement the work of educators. According to studies by Beeland Jr. and Roussou, learning through interaction has been considered effective by both students and teachers. This study aimed to develop an interactive curriculum supplement for smart devices in the form of a Location-specific Data Crowdsourcing Web Application (Arijo) which teaches students how to conduct experiments and upload their results to the internet for archival purposes. Arijo was developed with a combination of the Appsheet framework, Adobe Photoshop, and Google Maps. Three core functionalities were programmed: data input/output, data interpretation, and information dissemination. Arijo was able to perform its intended features, such as recording and displaying data within specific locations, along with displaying guides on how to conduct an experiment. Through these features, Arijo fulfilled its main objective of serving as a curriculum supplement. In the future, Arijo may be expanded to support more year levels and multiple curricula because of its modular nature.

Author 1: Justin Banusing
Author 2: Cedrick Jason Cruz
Author 3: Peter John Flores
Author 4: Eisen Ed Briones
Author 5: Gerald Salazar
Author 6: Rhydd Balinas
Author 7: Serafin Farinas

Keywords: Web application; smart devices; data crowdsourcing; curriculum supplement

Download PDF

Paper 20: A Portable Virtual LAB for Informatics Education using Open Source Software

Abstract: The need for students to have hands-on experience is very important in many disciplines to match the requirements of today’s dynamic job market. Informatics, which is the science of information engineering, has recently been integrated into many academic programs. Teaching students the main skills of modern software and web development is essential for them to be successful informatics professionals. In any informatics program, students work on projects as essential parts of some courses in their academic programs. This paper presents the development and evaluation of MiLAB (My Mobile Informatics Lab), a portable virtual lab environment for the teaching and learning of modern web development skills. MiLAB has been integrated into an undergraduate health informatics academic program to improve the teaching and learning of essential web development skills, such as database management and the customization of modern content management systems. The evaluation of MiLAB indicated that it served as an interactive personal environment for students to implement, collaborate on, and present their web development projects. Strengths, weaknesses and possible improvements are also discussed.

Author 1: Ali H. Alharbi

Keywords: Virtual labs; open source software; e-learning

Download PDF

Paper 21: LeafPopDown: Leaf Popular Down Caching Strategy for Information-Centric Networking

Abstract: Information-Centric Networking (ICN) is a name-based internet architecture and is considered an alternative to the IP-based internet architecture. The in-network caching feature used in ICN has attracted research interest as it reduces network traffic and server overload and minimizes the latency experienced by end users. Researchers have proposed different caching policies for ICN aiming to optimize performance metrics such as cache hits, diversity, and eviction operations. In this paper, we propose LeafPopDown, a novel caching strategy for ICN that significantly reduces eviction operations and enhances cache hits and diversity.
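The paper does not spell out LeafPopDown's mechanics in the abstract, but the metrics it targets (cache hits, eviction operations) can be illustrated with a generic popularity-aware cache. The eviction rule below (evict the least-requested item) is a stand-in for illustration, not the authors' actual LeafPopDown policy.

```python
# A minimal sketch of a popularity-aware in-network cache, tracking the
# metrics the paper targets: cache hits and eviction operations. The
# eviction rule (drop the least-requested item) is a generic stand-in,
# not the authors' exact LeafPopDown strategy.

class PopularityCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}        # content name -> cached content
        self.requests = {}     # content name -> request count (popularity)
        self.hits = 0
        self.evictions = 0

    def request(self, name, fetch):
        self.requests[name] = self.requests.get(name, 0) + 1
        if name in self.store:
            self.hits += 1
            return self.store[name]
        content = fetch(name)                  # cache miss: fetch upstream
        if len(self.store) >= self.capacity:
            # evict the least popular cached item
            victim = min(self.store, key=lambda n: self.requests[n])
            del self.store[victim]
            self.evictions += 1
        self.store[name] = content
        return content

cache = PopularityCache(capacity=2)
for name in ["a", "b", "a", "c", "a", "b"]:
    cache.request(name, fetch=lambda n: n.upper())
```

Keeping popular content cached is what drives the hit rate up and the eviction count down relative to popularity-blind policies such as plain LRU.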

Author 1: Hizbullah Khattak
Author 2: Noor Ul Amin
Author 3: Ikram ud Din
Author 4: Insafullah
Author 5: Jawaid Iqbal

Keywords: Information-centric networking; caching; popularity; least recently used

Download PDF

Paper 22: Face Age Estimation Approach based on Deep Learning and Principle Component Analysis

Abstract: This paper presents an approach to age estimation that classifies facial images into predefined age groups. Such a task faces several difficulties because of the individual characteristics of every person: factors like exposure, weather, gender, and lifestyle all come into play. While some trends are similar for faces from the same age group, it is problematic to distinguish the aging aspects of every age group. This paper concentrates on four chosen age groups in which the estimation takes place. We employed deep learning, a fast and effective machine learning method, to solve the age categorization issue. Principal component analysis (PCA) was used for extracting features and reducing the dimensionality of face images. Age estimation was applied to three different aging datasets from Morph, and experimental results are reported to validate the approach’s efficiency and robustness. The results show that the current approach achieves high classification accuracy compared with support vector machines (SVM) and k-nearest neighbors (K-NN).

Author 1: Noor Mualla
Author 2: Essam H. Houssein
Author 3: Hala H. Zayed

Keywords: Deep learning; principal component analysis; support vector machine; K-NN; age estimation

Download PDF

Paper 23: Development and Validation of a Cooling Load Prediction Model

Abstract: In smart buildings, cooling load prediction is important and essential for energy efficiency, especially in hot countries. Indeed, prediction is required in order to inform occupants about their consumption and encourage them to take decisions that would potentially decrease their energy demand. In some existing models, prediction is based on a selected reference day, and this selection depends on the similarity of several conditions; such models need deep analysis of large amounts of past data. Instead of a deep study to select the reference day well, this paper focuses on a short sampling rate for predicting the next state, so the method requires fewer inputs and less stored data, and prediction results are closer to the real state. In the first phase, an hourly cooling load model is implemented. This model takes the current cooling load, the current outside temperature, and the weather forecast as inputs to predict the next hour’s cooling consumption. To enhance the model’s performance and reliability, the sampling period is decreased to 30 minutes with respect to the system dynamics. Lastly, prediction accuracy is improved by using previous errors between the actual cooling load and the prediction results. Simulations realized on nodes located at a campus show good agreement with measurements.
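The error-feedback idea in this abstract (predict the next sample from the current load and weather, then correct with the previous prediction error) can be sketched as follows. The linear coefficients and sample data are illustrative placeholders, not the paper's fitted model.

```python
# A minimal sketch of short-horizon prediction with error feedback:
# predict the next cooling load from the current load and temperature
# forecast, then correct with part of the previous prediction error.
# Coefficients and data are illustrative, not the paper's fitted values.

def predict_next(load_now, temp_now, temp_forecast, prev_error,
                 temp_coeff=0.8, error_coeff=0.5):
    # base prediction: persistence plus a temperature-driven adjustment
    base = load_now + temp_coeff * (temp_forecast - temp_now)
    # feedback: reuse part of the previous error to correct the estimate
    return base + error_coeff * prev_error

loads = [100.0, 104.0, 109.0, 113.0]   # measured cooling load (kW)
temps = [30.0, 31.0, 32.0, 33.0]       # outside temperature (degC)

error = 0.0
preds = []
for t in range(len(loads) - 1):
    p = predict_next(loads[t], temps[t], temps[t + 1], error)
    preds.append(p)
    error = loads[t + 1] - p           # this error feeds the next step
```

Feeding the residual back in this way is what the abstract's last improvement step amounts to: systematic under- or over-prediction is partially cancelled at the next sample.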

Author 1: Abir Khabthani
Author 2: Leila Châabane

Keywords: Smart building; energy efficiency; prediction; short sampling-rate; less stored data

Download PDF

Paper 24: A Multilingual Datasets Repository of the Hadith Content

Abstract: Knowledge extraction from unstructured data is a challenging problem in the research domain of Natural Language Processing (NLP). It requires complex NLP tasks like entity extraction and Information Extraction (IE), but one of the most challenging tasks is to extract all the required entities in a structured format so that data analysis can be applied. Our focus is to explain how the data is extracted into datasets or a conventional database so that further text and data analysis can be carried out. This paper presents a framework for extracting Hadith data from authentic Hadith sources. Hadith is the collection of sayings of the Holy Prophet Muhammad, the last holy prophet according to Islamic teachings. The paper discusses the preparation of the dataset repository and highlights issues in the relevant research domain. The research problems and their solutions regarding data extraction, pre-processing, and data analysis are elaborated. The results have been evaluated using standard performance evaluation measures. The dataset is available in multiple languages and multiple formats, free of cost for research purposes.
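The keywords list regex as one of the extraction tools, so the core step (turning a semi-structured text record into named fields) can be sketched as below. The input layout and field names are hypothetical, not the authors' actual source format.

```python
# A minimal sketch of regex-based extraction of named entities into a
# structured record. The record layout and field names are hypothetical
# illustrations, not the actual source format used by the authors.
import re

raw = "Book: Faith | Hadith No: 12 | Narrator: Abu Huraira | Text: example"

pattern = re.compile(
    r"Book:\s*(?P<book>[^|]+)\|\s*Hadith No:\s*(?P<number>\d+)\s*"
    r"\|\s*Narrator:\s*(?P<narrator>[^|]+)\|\s*Text:\s*(?P<text>.*)"
)

m = pattern.search(raw)
# named groups map directly onto database columns or dataset fields
record = {k: v.strip() for k, v in m.groupdict().items()}
```

Once each record is a dictionary of named fields, loading it into a conventional database or exporting it in multiple formats is straightforward.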

Author 1: Ahsan Mahmood
Author 2: Hikmat Ullah Khan
Author 3: Fawaz K. Alarfaj
Author 4: Muhammad Ramzan
Author 5: Mahwish Ilyas

Keywords: Data extraction; preprocessing; regex; Hadith; text analysis; parsing

Download PDF

Paper 25: A 1NF Data Model for Representing Time-Varying Data in Relational Framework

Abstract: Attaching date and time to varying data plays a definite role in representing a dynamic domain and resources in database systems. A conventional database stores only current data and can represent knowledge only in a static sense, whereas a time-varying database represents knowledge in a dynamic sense. This paper focuses on incorporating interval-based timestamping in a First Normal Form (1NF) data model. The 1NF approach has been chosen for its ease of implementation in a relational framework, as well as to give the temporal data representation the modeling and querying power of the relational data model. Simulation results revealed that the proposed approach substantially improved the performance of temporal data representation in terms of required memory storage and query processing time.
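Interval-based timestamping in 1NF means each tuple carries its own validity interval as ordinary columns. A minimal sketch using SQLite, with an illustrative schema rather than the paper's exact model:

```python
# A minimal sketch of interval-based (valid-time) timestamping in a 1NF
# relational table: each tuple carries [valid_from, valid_to) as plain
# columns, and a time-slice query selects the row whose interval contains
# the query time. The schema is illustrative, not the paper's exact model.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE emp_salary ("
    " emp_id INTEGER, salary INTEGER,"
    " valid_from TEXT, valid_to TEXT)"   # interval endpoints as ISO dates
)
rows = [
    (1, 50000, "2016-01-01", "2017-01-01"),   # historical salary
    (1, 55000, "2017-01-01", "9999-12-31"),   # current salary (open end)
]
con.executemany("INSERT INTO emp_salary VALUES (?, ?, ?, ?)", rows)

# time-slice query: what was employee 1's salary on 2016-06-15?
(salary,) = con.execute(
    "SELECT salary FROM emp_salary"
    " WHERE emp_id = ? AND valid_from <= ? AND ? < valid_to",
    (1, "2016-06-15", "2016-06-15"),
).fetchone()
```

Because the intervals are ordinary attributes, the table stays in 1NF and any relational engine can store and query the history without special temporal extensions.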

Author 1: Nashwan Alromema
Author 2: Fahad Alotaibi

Keywords: Time-oriented data model; time-varying data; valid-time data; transaction-time data; bitemporal data; data model; N1NF; 1NF

Download PDF

Paper 26: Sentiment Analysis using SVM: A Systematic Literature Review

Abstract: The world has revolutionized and phased into a new era, an era which upholds the true essence of technology and digitalization. As the market has evolved at a staggering scale, it is essential to exploit the advantages and opportunities it provides. With the advent of Web 2.0, and considering the scalability and unbounded reach it offers, it would be detrimental for an organization not to adopt the new techniques in the competitive stakes that this emerging virtual world has set. Transformed and highly intelligent data mining approaches now allow organizations to collect, categorize, and analyze users’ reviews and comments from micro-blogging sites regarding their services and products. This type of analysis enables organizations to assess what consumers want, what they disapprove of, and what measures can be taken to sustain and improve the performance of products and services. This study focuses on a critical analysis of the literature from 2012 to 2017 on sentiment analysis using the support vector machine (SVM), one of the most widely used supervised machine learning techniques for text classification. This systematic review will serve scholars and researchers analyzing the latest work on sentiment analysis with SVM, and provide them a baseline for future trends and comparisons.
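The SVM studies reviewed here all reduce to learning a linear decision boundary over bag-of-words features. To stay dependency-free, the sketch below trains a simple perceptron, a linear classifier in the same family as a linear SVM (without the max-margin objective), on a toy sentiment set; the corpus is invented for illustration.

```python
# Sentiment classification with a linear model over bag-of-words features.
# The reviewed studies use SVMs; this dependency-free sketch trains a
# perceptron instead, which learns the same kind of linear boundary but
# without the max-margin objective. The toy corpus is illustrative only.

train = [
    ("great product works well", 1),
    ("love this phone", 1),
    ("terrible battery very bad", -1),
    ("bad service hate it", -1),
]

weights = {}                       # word -> learned weight
for _ in range(10):                # a few passes over the training data
    for text, label in train:
        words = text.split()
        score = sum(weights.get(w, 0.0) for w in words)
        if label * score <= 0:     # misclassified: nudge word weights
            for w in words:
                weights[w] = weights.get(w, 0.0) + label

def classify(text):
    s = sum(weights.get(w, 0.0) for w in text.split())
    return 1 if s > 0 else -1      # 1 = positive polarity, -1 = negative
```

A real SVM replaces the update rule with a margin-maximizing optimization, which is what gives it the robustness the surveyed papers rely on.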

Author 1: Munir Ahmad
Author 2: Shabib Aftab
Author 3: Muhammad Salman Bashir
Author 4: Noureen Hameed

Keywords: Sentiment analysis; polarity detection; machine learning; SVM; support vector machine; SLR; systematic literature review

Download PDF

Paper 27: Mining Trending Hash Tags for Arabic Sentiment Analysis

Abstract: People post millions of messages every day on microblogging social networks, especially Twitter, which makes microblogs a rich source of public opinion, customer comments, and reviews. Companies and public sectors are looking for a way to measure public response and feedback on a particular service or product. Sentiment analysis is an encouraging technique capable of sensing public opinion in a faster and cheaper way than traditional survey methods like questionnaires and interviews. Various sentiment methods have been developed in many languages, such as English and Arabic, with many more studies in the former. Sometimes hashtags are misleading or carry a title that does not really reflect the subject; tweets within a trending hashtag may contain keywords or topic titles that better represent its subject. This research proposes an approach that explores trending Twitter hashtags to retrieve tweets, groups the retrieved tweets to learn topic profiles, performs sentiment analysis to test the subjectivity of tweets, and then develops a prediction model using deep learning to classify a new tweet into the appropriate topic profile. Trending Arabic hashtags have been used to evaluate the proposed approach. The proposed approach (clustering topics within a hashtag trend to learn topic profiles, then performing sentiment analysis) shows better accuracy than sentiment analysis without clustering the topics.

Author 1: Yahya AlMurtadha

Keywords: Arabic sentiment analysis; twitter; opinion mining; trending hashtags; text analysis; deep learning

Download PDF

Paper 28: A New Task Scheduling Algorithm using Firefly and Simulated Annealing Algorithms in Cloud Computing

Abstract: Task scheduling is a challenging and important issue which, considering increases in data sizes and large volumes of data, has turned into an NP-hard problem. It has attracted the attention of many researchers throughout the world, since cloud environments are in fact homogeneous systems for maintaining and processing the practical applications needed by users. Thus, task scheduling has become extremely important in order to provide better services to users. In this regard, the present study provides a new task-scheduling algorithm using both the firefly and simulated annealing algorithms, taking advantage of the merits of both. Moreover, efforts have been made to change the primary population, or primary solutions, of the firefly algorithm: the presented algorithm uses a better primary solution. Local search was another aspect considered for the new algorithm. The presented algorithm was compared and evaluated against common algorithms. As indicated by the results, the presented method performs noticeably better than the other algorithms in reducing makespan across different numbers of tasks and virtual machines.
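The simulated annealing half of the hybrid can be sketched as follows: assign tasks to virtual machines, then repeatedly move a random task, accepting worse schedules with a temperature-dependent probability. The task lengths and cooling schedule are illustrative, and the firefly component is omitted for brevity.

```python
# A minimal sketch of simulated annealing for makespan minimization:
# tasks are assigned to VMs, a random task is moved each step, and worse
# schedules are accepted with a temperature-dependent probability. Only
# the SA half of the paper's hybrid is shown; parameters are illustrative.
import math
import random

random.seed(0)
task_len = [8, 3, 7, 2, 5, 4]          # task lengths (arbitrary units)
n_vms = 2

def makespan(assign):
    loads = [0] * n_vms
    for t, vm in enumerate(assign):
        loads[vm] += task_len[t]
    return max(loads)                   # finish time of the busiest VM

assign = [random.randrange(n_vms) for _ in task_len]
best = makespan(assign)
temp = 10.0
while temp > 0.01:
    t = random.randrange(len(task_len))
    cand = assign[:]
    cand[t] = random.randrange(n_vms)   # move one task to a random VM
    delta = makespan(cand) - makespan(assign)
    # accept improvements always, worse moves with probability e^(-delta/T)
    if delta <= 0 or random.random() < math.exp(-delta / temp):
        assign = cand
    best = min(best, makespan(assign))
    temp *= 0.95                        # geometric cooling
```

The temperature-controlled acceptance of worse moves is what lets SA escape the local optima that a pure local search (or a firefly population alone) can get stuck in.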

Author 1: Fakhrosadat Fanian
Author 2: Vahid Khatibi Bardsiri
Author 3: Mohammad Shokouhifar

Keywords: Firefly; makespan; simulated annealing; task scheduling; cloud

Download PDF

Paper 29: The Proposed Model to Increase Security of Sensitive Data in Cloud Computing

Abstract: Security of data in the cloud is a complex problem, and it becomes more critical when the data in question is highly sensitive. One of the main approaches to this problem is encryption of data at rest, which comes with its own difficulties, such as efficient key management and access permissions. In this paper, we propose a new approach to security that is controlled by the IT Security Specialist (ITSS) of the company or organization. The approach is based on multiple strategies of file encryption, partitioning, and distribution among multiple storage providers, resulting in increased confidentiality, since a supposed attacker would first need to obtain parts of a file from different storage providers and know how to combine them before any decryption attempt. All details of the strategy used for a particular file are stored in a separate file, which can be considered a master key for the file contents. We also present each strategy with results and comments on the measurements performed.
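The split-and-distribute idea can be sketched as below: a file's bytes are cut into parts, each part goes to a different provider, and a separate master record keeps the reassembly order. Encryption is omitted here, and the provider names are hypothetical; in the proposed model each part would also be encrypted.

```python
# A minimal sketch of the partition-and-distribute idea: file bytes are
# split into parts held by different storage providers, and a separate
# "master" record keeps the reassembly order. Encryption is omitted and
# the provider names are hypothetical placeholders.

def split(data: bytes, n_parts: int):
    size = -(-len(data) // n_parts)          # ceiling division
    return [data[i * size:(i + 1) * size] for i in range(n_parts)]

secret = b"highly sensitive payload"
parts = split(secret, 3)

# each part goes to a different provider; no single provider holds it all
providers = {"providerA": parts[0], "providerB": parts[1], "providerC": parts[2]}
master = ["providerA", "providerB", "providerC"]   # reassembly order

# an attacker holding one provider's part sees only a fragment; recovery
# needs every part plus the master record describing how to combine them
restored = b"".join(providers[name] for name in master)
```

The master record plays the role the abstract describes: without it, even an attacker who collects all the fragments does not know the combination strategy.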

Author 1: Dhuratë Hyseni
Author 2: Besnik Selimi
Author 3: Artan Luma
Author 4: Betim Cico

Keywords: ITSS-IT security specialist; partitioning; confidentiality; cloud service provider; cloud service client; platform as a service; software as a service; third party auditor

Download PDF

Paper 30: Improvement of the Frequency Characteristics for RFID Patch Antenna based on C-Shaped Split Ring Resonator

Abstract: In this paper, we present a new technique for improving the frequency characteristics and miniaturizing the geometric dimensions of an RFID patch antenna operating in the SHF band. The technique consists of implementing a periodic network, based on a new model of a C-shaped split ring resonator, on the square slots of a radiating rectangular patch antenna. Simulation results prove that the proposed technique offers excellent radiation efficiency and size reduction compared to the reference patch antenna.

Author 1: Mahdi Abdelkarim
Author 2: Seif Naoui
Author 3: Lassad Larach
Author 4: Ali Gharsallah

Keywords: Rectangular patch antenna; C-shaped split ring resonator; RFID reader antenna; metamaterials antenna

Download PDF

Paper 31: Impact of Web 2.0 on Digital Divide in AJ&K Pakistan

Abstract: The digital divide is normally measured in terms of the gap between those who can efficiently use new technological tools, such as the internet, and those who cannot. It was also hypothesized that Web 2.0 tools motivate people to use technology, i.e. social networking sites can play an important role in bridging the digital gap. The study was conducted to determine the presence of a digital divide in the urban and rural areas of district Muzaffrabad, Azad Jammu & Kashmir. A cross-sectional community-based survey was conducted involving 384 respondents from the city of Muzaffrabad and the village of Garhi Doppta. The existence of a digital divide was assessed on the basis of the questionnaires given. A chi-square test was applied to find the association of different demographic and ICT-related factors with internet usage. Despite growing awareness, there are indications of gender-, age-, and area-based digital divides. Outcomes of the survey affirmed that Web 2.0 based websites are becoming popular and attracting people to use the internet.
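The chi-square test of independence used in the study compares observed counts with the counts expected under independence. A minimal sketch on a hypothetical urban/rural contingency table (the figures are invented, not the survey's data):

```python
# A minimal sketch of the chi-square test of independence: compare the
# observed counts of internet use across a factor (here a hypothetical
# urban/rural split) with the counts expected if use were independent of
# area. The figures are illustrative, not the survey's actual data.

observed = [[90, 30],    # urban: internet users, non-users
            [60, 60]]    # rural: internet users, non-users

row_tot = [sum(r) for r in observed]
col_tot = [sum(c) for c in zip(*observed)]
total = sum(row_tot)

chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_tot[i] * col_tot[j] / total
        chi2 += (obs - expected) ** 2 / expected
# compare chi2 against the critical value for 1 degree of freedom
# (3.841 at the 0.05 level) to decide whether use depends on area
```

For a 2x2 table the degrees of freedom are (rows-1)(cols-1) = 1, so a statistic above 3.841 rejects independence at the 0.05 level.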

Author 1: Sana Shokat
Author 2: Rabia Riaz
Author 3: Sanam Shahla Rizvi
Author 4: Farina Riaz
Author 5: Samaira Aziz
Author 6: Raja Shoaib Hussain
Author 7: Mohaib Zulfiqar Abbasi
Author 8: Saba Shabir

Keywords: AJ&K; digital divide; ICT; web2.0; social networking; social inclusion and cohesion enabling approaches; net-living life styling personalization and optimization; subjective human and social factors; well-being through net living

Download PDF

Paper 32: Studying the Impact of Water Supply on Wheat Yield by using Principle Lasso Radial Machine Learning Model

Abstract: Wheat plays a vital role in food production, as it fulfills 60% of the calorie and protein requirements of 35% of the world’s population. Owing to its importance in food, wheat demand is increasing continuously. Wheat yield depends on the availability of water supply. Due to the climatic and environmental variations of different countries, water is not available in the constant, desired quantity necessary for better wheat yield. There is thus a strong relationship and dependency between water supply and wheat yield, and water supply is becoming an issue because it directly affects wheat yield. In this research, a Principle Lasso Radial (PLR) model is proposed using machine learning techniques to measure the effect of water supply on wheat yield. In this model, various experiments are conducted with respect to the performance metrics, i.e. relative water content, waxiness, grains per spike, and plant height. The PLR model produces dimensionally reduced data with respect to these performance metrics; that data is provided to a Radial Basis Neural Network (RBNN), which yields regression values R under different water supply conditions. The PLR model achieved an accuracy of 89% among various machine learning techniques.

Author 1: Muhammad Adnan
Author 2: M. Abid
Author 3: M. Ahsan Latif
Author 4: Abaid-ur-Rehman
Author 5: Naheed Akhter
Author 6: Muhammad Kashif

Keywords: Radial basis function (RBF); Radial Basis Neural Network (RBNN); ANN; lasso; principal component analysis (PCA)

Download PDF

Paper 33: An Improvement of FA Terms Dictionary using Power Link and Co-Word Analysis

Abstract: Information retrieval involves obtaining desired information from a database. In this paper, we use the Power Link measure to improve the field association (FA) terms extracted from a corpus by the proposed algorithm, supporting the machine in taking the right decision and attaching candidate words in their proper positions in the dictionary of field association terms. Power Link is used as a quantitative tool to compute the co-citation relation between two words, depending on the co-frequency of and distances between instances of the words. The concept of Power Link, together with modified rules, is used to classify scientific papers into their proper fields. Instead of treating a document as a whole, a given document is divided into three parts: title, abstract, and body. A given term is assigned a weight that depends on its location inside the document, with the greatest weight given to the title, then the abstract, then the body, respectively. Results show an improvement in precision, recall, and F-measure.
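The location-dependent weighting (title over abstract over body) can be sketched as below. The 3/2/1 weights are illustrative placeholders, not the paper's values.

```python
# A minimal sketch of location-weighted term scoring: occurrences in the
# title count more than the abstract, which counts more than the body.
# The 3/2/1 weights are illustrative placeholders, not the paper's values.

WEIGHTS = {"title": 3.0, "abstract": 2.0, "body": 1.0}

def term_scores(doc):
    scores = {}
    for section, text in doc.items():
        for word in text.lower().split():
            # each occurrence contributes its section's weight
            scores[word] = scores.get(word, 0.0) + WEIGHTS[section]
    return scores

doc = {
    "title": "field association terms",
    "abstract": "improving field association term dictionaries",
    "body": "power link measures co-citation between terms",
}
scores = term_scores(doc)
```

A term appearing in both title and abstract thus outscores one appearing only in the body, which is the ranking behaviour the abstract describes.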

Author 1: El-Sayed Atlam
Author 2: Dawlat A. El A.Mohamed
Author 3: Fayed Ghaleb
Author 4: Doaa Abo-Shady

Keywords: Information retrieval; FA terms; co-word analysis; power link; precision; recall

Download PDF

Paper 34: An Improved Social Media Analysis on 3 Layers: A Real Time Enhanced Recommendation System

Abstract: The internet can be considered an open field for expression regarding products, politics, ideas, and people. These expressive interactions generate a large amount of data tied to users and groups. In that scope, big data, along with various technologies such as social media, cloud computing, and machine learning, can be used as a toolbox to make sense of the data, giving the opportunity to generate efficient analyses and studies of individuals and crowds regarding market orientation, politics, and industry. Recommendation systems act as the technological pillars here, using sentiment analysis and predictive analysis to make sense of user data. However, this complex operation comes at a price: each analysis requires its own architecture and tools. In this paper, a novel design of a recommender system is provided, powered by sentiment analysis and predictive models, applied to an example data flow from the social media platform Twitter.

Author 1: Mohamed Amine TALHAOUI
Author 2: Hicham AIT EL BOUR
Author 3: Reda MOULOUKI
Author 4: Saida NKIRI
Author 5: Mohamed AZOUAZI

Keywords: Twitter; machine learning; sentiment; Lambda; recommendation; Big data; opinion mining

Download PDF

Paper 35: Machine Learning Method To Screen Inhibitors of Virulent Transcription Regulator of Salmonella Typhi

Abstract: The PhoP regulon, a two-component regulatory system, is a well-studied system of Salmonella enterica serotype Typhi and has been shown to play a crucial role in the pathophysiology of typhoid as well as the intracellular survival of the bacterium within host macrophages. The absence of the PhoP regulon in the human system makes its regulatory proteins specific targets for future drug discovery programs against multi-drug resistant (MDR) strains of Salmonella enterica serotype Typhi. In recent years, high-throughput screening (HTS) has proven to be a reliable source of hit finding against various diseases, including typhoid. However, the cost and time involved in HTS are of significant concern. Therefore, there is still a need for an expedient method that is reliable in screening active hit molecules while being less time-consuming and inexpensive. In this regard, the application of a machine learning (ML) based chemoinformatics model to perform HTS of drug-like hit molecules against MDR strains of Salmonella enterica serotype Typhi is most applicable. In this study, bagging and gradient boosting based ML algorithms were used to build a predictive classification model to perform virtual HTS of active inhibitors of the PhoP regulon of Salmonella enterica serotype Typhi. The eXtreme Gradient Boosting (XGBoost) based classification model was comparatively accurate and sensitive in classifying active drug-like inhibitors of the PhoP regulon.

Author 1: Syed Asif Hassan
Author 2: Atif Hassan
Author 3: Tabrej Khan

Keywords: Typhoid; PhoP regulon; classification model; machine learning (ML) algorithm; eXtreme Gradient Boosting; random forest; sensitivity; accuracy

Download PDF

Paper 36: Comparative Performance of Deep Learning and Machine Learning Algorithms on Imbalanced Handwritten Data

Abstract: Imbalanced data is one of the challenges in classification tasks in machine learning. Data disparity produces a biased output from a model regardless of how recent the technology is. However, deep learning algorithms such as deep belief networks have shown promising results in many domains, especially in image processing. Therefore, in this paper, we review the effect of imbalanced class distributions using deep belief networks as the benchmark model and compare it with conventional machine learning algorithms, such as backpropagation neural networks, decision trees, naïve Bayes, and support vector machines, on the MNIST handwritten dataset. The experiments show that although the deep belief network is stable and suitable for multiple domains, imbalanced data distribution still manages to affect the outcome of the conventional machine learning algorithms.
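Why class imbalance biases a model can be shown with a degenerate baseline: on a 95:5 split, a classifier that always predicts the majority class still scores 95% accuracy while never detecting the minority class. The split is an invented illustration, not the paper's data.

```python
# A minimal illustration of the bias class imbalance induces: on a 95:5
# label split, a degenerate "always predict the majority class" model
# scores 95% accuracy yet has zero recall on the minority class, so
# accuracy alone hides exactly the effect the experiments above measure.

labels = [0] * 95 + [1] * 5            # imbalanced ground truth

majority = max(set(labels), key=labels.count)
preds = [majority] * len(labels)       # degenerate majority-class model

accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
minority_recall = (
    sum(p == y == 1 for p, y in zip(preds, labels)) / labels.count(1)
)
```

This is why studies like the one above report more than raw accuracy when comparing classifiers on skewed data.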

Author 1: A’inur A’fifah Amri
Author 2: Amelia Ritahani Ismail
Author 3: Abdullah Ahmad Zarir

Keywords: Deep belief networks; support vector machine; back propagation neural networks; imbalanced handwritten data; classification

Download PDF

Paper 37: Insulator Detection and Defect Classification using Rotation Invariant Local Directional Pattern

Abstract: Detecting power line insulators automatically and analyzing their defects are vital processes in maintaining power distribution systems. In this work, a rotation invariant texture pattern named the rotation invariant local directional pattern (RI-LDP) is proposed for representing insulator images. First, the local directional pattern (LDP) is applied to the image; it encodes the local texture pattern into an eight-bit binary code by analyzing the magnitude of edge responses in eight different directions. This LDP code is then made robust to rotation by meticulously rearranging the generated binary code, yielding the RI-LDP. Insulator detection is carried out with the RI-LDP based histogram acting as the feature vector and a support vector machine (SVM) playing the role of the classifier. The detected insulator image region is further analyzed for possible defect identification; for this, an automatic extraction method for the individual insulator caps is proposed. Defects in the segmented insulators are analyzed using the LDP texture feature on individual cap regions. We evaluated the proposed method using two sets of 493 real-world insulator images captured from a ground vehicle. The proposed insulator detector shows performance comparable to the state of the art, and our defect analysis method outperforms existing methods.

Author 1: Taskeed Jabid
Author 2: Tanveer Ahsan

Keywords: Insulator detection; insulator defect analysis; local direction pattern (LDP); rotation invariant local directional pattern (RI-LDP); support vector machine (SVM)

Download PDF

Paper 38: Machine-Learning Techniques for Customer Retention: A Comparative Study

Abstract: Nowadays, customers have become more interested in the quality of service (QoS) that organizations can provide them. Services provided by different vendors are not highly differentiated, which increases competition between organizations to maintain and increase their QoS. Customer Relationship Management (CRM) systems are used to enable organizations to acquire new customers, establish a continuous relationship with them, and increase customer retention for more profitability. CRM systems use machine-learning models to analyze customers’ personal and behavioral data to give organizations a competitive advantage by increasing the customer retention rate. These models can predict which customers are expected to churn and the reasons for churn. Predictions are used to design targeted marketing plans and service offers. This paper compares and analyzes the performance of different machine-learning techniques used for the churn prediction problem. Ten analytical techniques that belong to different categories of learning were chosen for this study: Discriminant Analysis, Decision Trees (CART), instance-based learning (k-nearest neighbors), Support Vector Machines, Logistic Regression, ensemble-based learning techniques (Random Forest, AdaBoost trees, and Stochastic Gradient Boosting), Naïve Bayesian, and Multi-layer Perceptron. The models were applied to a telecommunication dataset containing 3333 records. Results show that both Random Forest and AdaBoost outperform all other techniques with almost the same accuracy of 96%. Both Multi-layer Perceptron and Support Vector Machine can be recommended as well, with 94% accuracy. Decision Trees achieved 90%, Naïve Bayesian 88%, and finally Logistic Regression and Linear Discriminant Analysis (LDA) 86.7%.

Author 1: Sahar F. Sabbeh

Keywords: Customer relationship management (CRM); customer retention; analytical CRM; business intelligence; machine-learning; predictive analytics; data mining; customer churn

Download PDF

Paper 39: Crowd Counting Mapping to make a Decision

Abstract: Congestion typically occurs when the size of a crowd exceeds the capacity of facilities. In some cases, when buildings have to be evacuated, people might be trapped in congestion and be unable to escape from the building early enough, which might even lead to stampedes. Crowd Congestion Mapping (CCM) is a system that enables organizations to find information about crowd congestion in target places. It supports making the right decisions: determining the reasons that led to the congestion and taking appropriate measures to prevent it from happening again, for example by optimizing the locations and dimensions of emergency exits and identifying less congested paths in the target places. The system collects crowd congestion data from the locations and makes it available to corporations via a target map. Congestion is plotted on the target place map; for example, a red line marks a highly congested location, a pink line a mildly congested location, and a green line free flow of people in the location.
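The colour coding described above amounts to thresholding a measured crowd density. A minimal sketch, with thresholds that are illustrative placeholders rather than the system's calibrated values:

```python
# A minimal sketch of mapping a measured crowd density to the map colours
# described above (red = highly congested, pink = mildly congested,
# green = free flow). The density thresholds are illustrative placeholders,
# not the system's calibrated values.

def congestion_colour(people_per_m2):
    if people_per_m2 >= 4.0:
        return "red"     # highly congested location
    if people_per_m2 >= 2.0:
        return "pink"    # mildly congested location
    return "green"       # free flow of people
```

In practice the density input would come from the image-processing and human-detection stages listed in the keywords.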

Author 1: Enas Faisal
Author 2: Azzam Sleit
Author 3: Rizik Alsayyed

Keywords: Crowd; map; image processing; human detection; threshold; recognition

Download PDF

Paper 40: Quality of Service Impact on Deficit Round Robin and Stochastic Fair Queuing Mechanism in Wired-cum-Wireless Network

Abstract: Deficit round robin (DRR) and stochastic fair queuing (SFQ) are active queue management (AQM) techniques. These AQM techniques play an important role in buffer management, controlling congestion in wired-cum-wireless networks by dropping packets when the buffer overflows or is near overflow. This study focuses on the performance evaluation of DRR and SFQ in different scenarios: an increasing number of nodes, pause time, and mobility. We evaluate the performance of DRR and SFQ based on two parameters: average packet delay and average packets dropped. With an increasing number of nodes, SFQ outperformed DRR by having comparatively low per-packet delay, while DRR had a higher packet drop ratio than SFQ. In the mobility and pause time scenarios, SFQ had less per-packet delay while DRR had a lower packet drop ratio. These results reveal that DRR performance was affected by an increase in the number of nodes in a network: DRR sends packets in a round-robin fashion without regard for the bandwidth of a path, due to which its packet drop ratio was high. On the other hand, SFQ comparatively outperformed DRR in all scenarios by having less per-packet delay, though SFQ becomes aggressive by dropping more data packets during buffer overflow. In short, SFQ is preferable for a network where congestion occurs more frequently.
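The deficit round robin mechanism evaluated above can be sketched as follows: each flow earns a quantum of credit per round and may dequeue packets only while its accumulated deficit covers the packet size. This is the textbook DRR scheme, not the NS-2 implementation used in the study; flows and packet sizes are illustrative.

```python
# A textbook sketch of deficit round robin (DRR): each flow earns a
# quantum of credit per round and may send a packet only while its
# accumulated deficit covers the packet size, keeping bandwidth shares
# fair across flows. Flows and packet sizes are illustrative.
from collections import deque

flows = {
    "A": deque([300, 200, 500]),   # queued packet sizes (bytes)
    "B": deque([400, 400]),
}
quantum = 500                      # credit granted per flow per round
deficit = {name: 0 for name in flows}
sent = []                          # transmission order: (flow, size)

while any(flows.values()):
    for name, queue in flows.items():
        if not queue:
            continue
        deficit[name] += quantum               # credit for this round
        while queue and queue[0] <= deficit[name]:
            size = queue.popleft()
            deficit[name] -= size              # spend credit on the packet
            sent.append((name, size))
        if not queue:
            deficit[name] = 0                  # empty flow keeps no credit
```

Because leftover credit carries over between rounds, a flow with large packets is not starved, yet no flow can exceed its long-run fair share; the abstract's observation that DRR ignores path bandwidth refers to a property of the surrounding network, not of this per-queue accounting.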

Author 1: Fahim Khan Khalil
Author 2: Samiullah Khan
Author 3: Farooq Faisal
Author 4: Mahmood Nawaz
Author 5: Farkhanda Javed
Author 6: Fawad Ali Khan
Author 7: Rafidah MD Noor
Author 8: Matiullah
Author 9: Zia ullah
Author 10: Muhammad Shoaib
Author 11: Faqir Usman Masood

Keywords: Active queue management; deficit round robin; stochastic fair queuing

Download PDF

Paper 41: Effect of Increasing Number of Nodes on Performance of SMAC, CSMA/CA and TDMA in MANETs

Abstract: The importance of Wireless Sensor Networks (WSN) is increasing due to their deployment for geographical, environmental, and surveillance purposes in war fields. WSN face several challenges due to their complex nature, including key problems such as routing and medium access control (MAC) protocols. Several approaches have been proposed for the performance evaluation of WSN on the basis of these issues, since MAC layer access protocols have a great impact on WSN performance. In this paper, we investigate the performance of three well-known MAC access protocols, i.e. the sensor medium access control protocol (SMAC), carrier sense multiple access with collision avoidance (CSMA/CA), and time division multiple access (TDMA), over the ad hoc on-demand distance vector (AODV) routing protocol. A number of simulation scenarios were carried out using NS-2; the simulation metrics used are throughput, end-to-end delay, and energy consumed. Simulation results show that SMAC outperforms CSMA/CA and TDMA, consuming less energy and achieving less end-to-end delay and higher throughput due to its contention-based approach to accessing the medium for transmission.

Author 1: Samiullah Khan
Author 2: Farooq Faisal
Author 3: Mahmood Nawaz
Author 4: Farkhanda Javed
Author 5: Fawad Ali Khan
Author 6: Rafidah MD Noor
Author 7: Matiullah
Author 8: Zia ullah
Author 9: Muhammad Shoaib
Author 10: Faqir Usman Masood

Keywords: Medium access control (MAC); sensor medium access control (SMAC); time division multiple access (TDMA); carrier sense multiple access with collision avoidance (CSMA/CA); ad-hoc on demand distance vector (AODV)

Download PDF

Paper 42: Dynamic Reconfiguration of LPWANs Pervasive System using Multi-agent Approach

Abstract: The development of Low Power Wide Area Networks (LPWAN) has given new hope that the Internet of Things and M2M networks will become the most prevalent network types in the industrial world in the near future. This type of network is designed to connect several entities in a radius that can reach up to 10 km. This gain in range is made possible by a reduction in the amount of information exchanged, which makes LPWAN networks most suitable for telemetry applications. The large network coverage offered by LPWAN makes it possible to connect a large number of objects; on the other hand, it makes it difficult for the pervasive system associated with this kind of network to integrate all these objects dynamically, so an automatic reconfiguration process becomes crucial for this kind of network. In this study, we propose multi-agent systems as a solution to virtualize the heterogeneity of the peripherals and to facilitate their integration and dynamic exploitation in the LPWAN system. This virtualization is possible thanks to the portability of multi-agent systems and the standardization of the exploitation of the services they offer.

Author 1: Ghouti ABDELLAOUI
Author 2: Fethi Tarik BENDIMERAD

Keywords: Dynamic reconfiguration; pervasive system; multiagent system; Low Power Wide Area Network (LPWAN)

Download PDF

Paper 43: Impact of Thyristor Controlled Series Capacitor on Voltage Profile of Transmission Lines using PSAT

Abstract: In a power system, voltage stability is very important in order to maintain the voltage within defined limits. The demand for electrical power has increased over the last decade without a matching expansion of the generation and transmission network, so the available transmission network is heavily loaded. This loading causes voltage instability, which on a heavily loaded system can lead to voltage collapse and power loss. It is therefore necessary to monitor voltage instability and reduce the risk of voltage collapse, which mostly occurs when little reactive power is available. Power-electronic devices, i.e. Flexible AC Transmission System (FACTS) controllers, are used to add reactive power to the system, which improves the voltage profile and minimizes the conditions under which voltage collapse occurs. In this paper a FACTS series compensator, the Thyristor Controlled Series Capacitor (TCSC), is inserted between two nodes of the IEEE 6-bus test system to examine the voltage profile. PSAT (Power System Analysis Tool), a MATLAB tool for power system analysis studies, is used, with the IEEE 6-bus system as the test case for the effectiveness of the proposed method. The voltage profiles with and without the TCSC device are then compared to draw conclusions.
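The mechanism can be illustrated numerically: a TCSC cancels part of the series line reactance, and since steady-state transfer over a lossless line is P = V₁·V₂·sin(δ)/X, lowering the effective reactance raises the transferable power at the same angle. A minimal sketch with illustrative per-unit values (not the paper's IEEE 6-bus data; `power_transfer` is a hypothetical helper):

```python
import math

def power_transfer(v1, v2, delta_rad, x_line, tcsc_compensation=0.0):
    """Steady-state active power across a lossless line, P = V1*V2*sin(d)/X.
    tcsc_compensation is the fraction of line reactance cancelled by the
    series capacitor (illustrative, not a value from the paper)."""
    x_eff = x_line * (1.0 - tcsc_compensation)
    return v1 * v2 * math.sin(delta_rad) / x_eff

# Without TCSC vs. with 40% series compensation (hypothetical numbers):
p0 = power_transfer(1.0, 1.0, math.radians(30), x_line=0.5)
p1 = power_transfer(1.0, 1.0, math.radians(30), x_line=0.5, tcsc_compensation=0.4)
# p1 / p0 equals 1 / 0.6: roughly 67% more transferable power
```

The same effect is what supports the voltage profile: less series reactance means a smaller voltage drop along the line for a given flow.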

Author 1: Babar Noor
Author 2: Muhammad Aamir Aman
Author 3: Murad Ali
Author 4: Sanaullah Ahmad
Author 5: Fazal Wahab Karam

Keywords: Flexible AC Transmission System (FACTS); Thyristor Controlled Series Capacitors (TCSC); Power System Analysis Tool (PSAT); voltage profile; voltage collapse

Download PDF

Paper 44: Efficiency and Performance Analysis of a Sparse and Powerful Second Order SVM Based on LP and QP

Abstract: A productivity analysis is done on the new algorithm, the “Second Order Support Vector Machine (SOSVM)”, which can be thought of as an offshoot of the popular SVM and is based on both its conventional QP version and the LP one. Our main goal is to produce a machine which is: 1) sparse and efficient; 2) powerful (kernel based) but not overfitted; and 3) easily realizable. Experiments on benchmark data show that to classify a new pattern, the proposed SOSVM requires as little as 2.7% of the samples of the original data set, 4.8% of those of the conventional QP SVM, or 48.3% of those of Vapnik’s LP SVM, which is already sparse. Despite this heavy reduction in test cost, its classification accuracy is very similar to that of the most powerful QP SVM while being very simple to produce. Moreover, two new terms, “Generalization Failure Rate (GFR)” and “Machine-Accuracy-Cost (MAC)”, are defined to measure the generalization deficiency and accuracy cost of a detector, respectively, and are used to compare different machines. Results show that our machine achieves a GFR as little as 1.4% of the QP SVM's or 1.5% of Vapnik’s LP SVM's, and a MAC as little as 2.6% of the QP SVM's or 35.9% of Vapnik’s sparse LP SVM's. Finally, having only two types of parameters to tune, this machine is straightforward and cheaper to produce compared to the most popular state-of-the-art machines in this direction. Collectively, these results fulfill the three key goals the machine is built for.
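The "test cost" being compared is essentially how many retained samples (kernel evaluations) a machine needs per classification. A minimal sketch of that ratio — the counts below are hypothetical, chosen only to echo the abstract's 2.7%/4.8% figures; the GFR and MAC metrics themselves are defined in the paper and not reproduced here:

```python
def sparsity_ratio(n_support_vectors, n_training_samples):
    """Fraction of training samples a kernel machine must keep to classify
    a new pattern -- the per-classification 'test cost' the abstract
    compares across machines."""
    return n_support_vectors / n_training_samples

# Illustrative counts (not the paper's exact figures):
qp_svm = sparsity_ratio(560, 1000)   # hypothetical dense QP SVM
sosvm = sparsity_ratio(27, 1000)     # SOSVM keeping ~2.7% of the data
# sosvm / qp_svm is about 0.048, i.e. ~4.8% of the QP SVM's cost
```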

Author 1: Rezaul Karim
Author 2: Amit Kumar Kundu

Keywords: Generalization failure rate; Kernel machine; LP; QP; machine accuracy cost; Second Order Support Vector Machine; sparse

Download PDF

Paper 45: A Fuzzy based Soft Computing Technique to Predict the Movement of the Price of a Stock

Abstract: Soft computing is a branch of artificial intelligence, and fuzzy logic is the study of fuzziness in data. The combination of these two techniques can provide an intelligent system with more ability and flexibility. The nature of data in the stock/capital market makes predicting the movement of a stock's price complex and challenging. This study combines fuzzy c-means clustering and a neural network technique to predict the price of a stock, and finds an optimal solution for predicting the future price. A comparison of time and space complexity showed that the proposed method is better than the existing methods.
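The clustering half of that combination assigns each sample a graded membership in every cluster rather than a hard label. A minimal 1-D fuzzy c-means sketch, not the authors' implementation — the deterministic initialisation, toy price samples, and `fuzzy_c_means` helper are illustrative assumptions:

```python
def fuzzy_c_means(data, c=2, m=2.0, iters=50):
    """Minimal 1-D fuzzy c-means: returns cluster centres and the fuzzy
    membership matrix u[i][k] (degree to which sample i belongs to
    cluster k). The fuzzifier m > 1 controls how soft memberships are."""
    # initialise centres spread across the data range (deterministic sketch)
    lo, hi = min(data), max(data)
    centres = [lo + (hi - lo) * k / (c - 1) for k in range(c)]
    u = [[0.0] * c for _ in data]
    for _ in range(iters):
        # update memberships from distances to the current centres
        for i, x in enumerate(data):
            d = [abs(x - ck) + 1e-12 for ck in centres]
            for k in range(c):
                u[i][k] = 1.0 / sum((d[k] / d[j]) ** (2.0 / (m - 1.0))
                                    for j in range(c))
        # update each centre as the membership-weighted mean
        for k in range(c):
            w = [u[i][k] ** m for i in range(len(data))]
            centres[k] = sum(wi * x for wi, x in zip(w, data)) / sum(w)
    return centres, u

prices = [10.1, 10.3, 9.9, 30.2, 29.8, 30.5]   # toy "price" samples
centres, u = fuzzy_c_means(prices)
# the two centres settle near 10 and 30; u feeds the downstream predictor
```

In a pipeline like the paper's, the membership vectors (or the cluster a sample mostly belongs to) would then be fed to the neural network predictor.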

Author 1: Ashit Kumar Dutta

Keywords: Soft computing; fuzzy logic; stock recommendation; fuzzy based soft computing; soft computing systems

Download PDF

Paper 46: A Compact Modified Square Printed Planar Antenna for UWB Microwave Imaging Applications

Abstract: In this paper, both the frequency- and time-domain performance of a new compact planar antenna for ultra-wideband (UWB) applications is fully investigated. The proposed antenna has a size of 12x18 mm² and provides a fractional bandwidth of more than 128% (3.057 GHz to 13.98 GHz, S11 < -10 dB). The results show that the proposed antenna's performance in terms of wide bandwidth, small size, gain and radiation pattern, transmission coefficient and system fidelity factor is very satisfactory. Moreover, the simulation results are fairly well verified by fabricating and testing the proposed antenna.

Author 1: Djamila Ziani
Author 2: Sidi Mohammed Meriah
Author 3: Lotfi Merad

Keywords: Modified square planar antenna; high bandwidth; return loss; UWB antenna; low profile; frequency and time domain analysis

Download PDF

Paper 47: Time-Dependence in Multi-Agent MDP Applied to Gate Assignment Problem

Abstract: Many disturbances can impact gate assignments in the daily operations of an airport. The Gate Assignment Problem (GAP) is an airport's main task for ensuring smooth flight-to-gate assignment while managing all disturbances. However, the flight schedule often undergoes unplanned disruptions, such as weather conditions, gate unavailability or simply a delay. A good GAP plan should handle stochastic events as far as possible and include them all in the assignment planning. To build a robust model that takes eventual planning disorder into account, a dynamic stochastic approach based on Markov Decision Process theory is designed. In this approach, gates are perceived as collaborative agents seeking to accomplish a specific set of flight assignment tasks as provided by a centralized controller. Multi-agent reasoning is then coupled with time dependence, with both time-dependent action durations and stochastic state transitions. This leads to a new model for the GAP powered by Time-dependent Multi-Agent Markov Decision Processes (TMMDP). The model can provide airport controllers with a robust prior solution for every time sequence, rather than risking online schedule adjustments to handle uncertainty. Its solution is a set of time-valued optimal decisions to be made in each case of traffic disruption and at every moment.
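The backbone of any such time-dependent MDP solution is backward induction: the transition kernel and rewards may differ at every step, and the result is one policy per time step, as the abstract describes. A generic sketch under assumed structure (the tiny free/busy gate example, its probabilities and rewards are all hypothetical, not from the paper):

```python
def finite_horizon_value_iteration(states, actions, P, R, horizon):
    """Backward induction for a time-dependent MDP. P[t][s][a] is a list of
    (prob, next_state) pairs and R[t][s][a] a scalar reward; both may vary
    with t. Returns terminal-to-initial values and one policy per step."""
    V = {s: 0.0 for s in states}          # terminal values
    policy = []
    for t in reversed(range(horizon)):
        newV, pi = {}, {}
        for s in states:
            q = {a: R[t][s][a] + sum(p * V[s2] for p, s2 in P[t][s][a])
                 for a in actions}
            pi[s] = max(q, key=q.get)     # greedy action at (t, s)
            newV[s] = q[pi[s]]
        V, policy = newV, [pi] + policy
    return V, policy

# Toy single-gate model (illustrative numbers only):
states, actions = ["free", "busy"], ["assign", "wait"]
step_P = {"free": {"assign": [(0.9, "busy"), (0.1, "free")],
                   "wait": [(1.0, "free")]},
          "busy": {"assign": [(1.0, "busy")],
                   "wait": [(0.7, "free"), (0.3, "busy")]}}
step_R = {"free": {"assign": 1.0, "wait": 0.0},
          "busy": {"assign": -1.0, "wait": 0.0}}
T = 3
V, policy = finite_horizon_value_iteration(states, actions,
                                           [step_P] * T, [step_R] * T, T)
# policy[0]["free"] is "assign": assigning a flight to a free gate pays off
```

The multi-agent coupling in the paper sits on top of this: each gate-agent runs such reasoning under tasks issued by the centralized controller.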

Author 1: Oussama AOUN
Author 2: Abdellatif EL AFIA

Keywords: Time-dependent Multi-Agent Markov Decision Processes; stochastic programming; flight delays; Gate Assignment Problem

Download PDF

Paper 48: A Novel DDoS Floods Detection and Testing Approaches for Network Traffic based on Linux Techniques

Abstract: In today’s digital world, Distributed Denial-of-Service (DDoS) attacks continuously disrupt users' access to Web Servers (WSVRs) and remain a massive threat to the World Wide Web (WWW). Such attacks can completely cut off the accessibility of WSVRs by disturbing data processing and intercommunication across Data-Driven Networks (DDN), their management, and cooperative communities on the Internet. The purpose of this research is to identify, describe and test the tools and features available in a Linux-based Availability Protection System (Linux-APS) lab design for filtering the malicious traffic flows of DDoS attacks. As sources of malicious traffic, the most widely used DDoS attacks targeting WSVRs are taken: Synchronize (SYN), User Datagram Protocol (UDP) and Internet Control Message Protocol (ICMP) flooding attacks are described, and different variants of mitigation techniques are explained. The available tools for manipulating network traffic, such as Ebtables and Iptables, are compared for each type of attack. A specially created experimental network with configured filter servers and a bridge was used for testing, and packet flow through the Linux-kernel network stack was inspected along with tuning options that increase the filter server's traffic throughput. As an outcome, Ebtables proved the most productive tool, owing to the fewer resources it needs to process each packet (frame); however, it requires a separate detection system to supply its filtering methods with data. The main conclusion is that Linux-APS solutions provide full functionality for filtering malicious DDoS traffic flows, either stand-alone or combined with detection systems.
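The flood-mitigation variants described here typically come down to rate limiting: traffic within a sustained rate (plus a burst allowance) passes, and the excess is dropped. A token-bucket sketch of that policy — the same shape as netfilter's `limit` match that Iptables/Ebtables rules use — with hypothetical thresholds, not the paper's configuration:

```python
class TokenBucket:
    """Token-bucket rate limiter: the policy behind SYN/UDP/ICMP flood
    filtering rules. Packets within the sustained rate (plus a burst
    allowance) pass; the excess flood is dropped."""
    def __init__(self, rate, burst):
        self.rate, self.capacity = float(rate), float(burst)
        self.tokens, self.stamp = float(burst), 0.0

    def allow(self, now):
        # refill tokens for the elapsed time, capped at the burst size
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.stamp) * self.rate)
        self.stamp = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# A 1000-packet SYN burst in 0.1 s against a 25 pkt/s, burst-50 policy:
guard = TokenBucket(rate=25, burst=50)
passed = sum(guard.allow(now=i * 0.0001) for i in range(1000))
# only the burst allowance plus the trickle of refilled tokens gets through
```

In the kernel this check runs per packet in the netfilter hooks; the detection system the paper mentions would decide which flows such a filter is applied to.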

Author 1: Muhammad Tahir
Author 2: Mingchu Li
Author 3: Naeem Ayoub
Author 4: Usman Shehzaib
Author 5: Atif Wagan

Keywords: DDoS attacks; floods detection; Linux-APS architecture; mitigation techniques; network traffic; netfilter; testing approaches

Download PDF

Paper 49: Choice of Knowledge Representation Model for Development of Knowledge Base: Possible Solutions

Abstract: In today's society, knowledge, information and intelligent computer systems based on knowledge bases play a great role. The ability of an intelligent system to implement its functions efficiently depends on how the knowledge base is organized and on whether the applied knowledge representation models comply with the set requirements. This article is devoted to the problem of choosing a knowledge representation model. Based on an analysis of the requirements for knowledge representation models, the application of extended semantic networks is shown to be one solution to this problem. The properties of extended semantic networks are analyzed, and relevant examples of applying extended semantic networks to knowledge representation in various spheres are offered.

Author 1: Sabina Katalnikova
Author 2: Leonids Novickis

Keywords: Extended semantic networks; knowledge base; knowledge representation model; semantic networks

Download PDF

Paper 50: Behavior of the Minimum Euclidean Distance Optimization Precoders with Soft Maximum Likelihood Detector for High Data Rate MIMO Transmission

Abstract: Linear closed-loop Multiple-Input Multiple-Output (CL-MIMO) precoding techniques, characterized by channel state information (CSI) knowledge at both sides of the link, aim to improve information throughput and reduce the bit error rate of the communication system. The processing involves multiplying the signal by a precoding matrix computed from the CSI under some optimization criterion. In this paper, we propose a new concatenation of precoders optimizing the minimal Euclidean distance with soft Maximum Likelihood (soft-ML) detection. We analyze the performance, in terms of bit error rate (BER), of the proposed association with three well-known quantized precoders based on the same criterion: the Maximum of minimum Euclidean distance (Max-dmin) precoder, the Orthogonalized Spatial Multiplexing precoder (POSM), and Orthogonalized Spatial Multiplexing (OSM), in a coded MIMO system over a Rayleigh fading channel using Quadrature Amplitude Modulation (QAM). Simulations show the interest of associating the dmin-based precoders with a soft-ML detector, with the best result achieved by the Max-dmin precoder.
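The criterion these precoders optimize is the minimum receive-side Euclidean distance: d_min = min over distinct symbol vectors s, s' of ||H F (s − s')||, which governs ML error probability at high SNR. A brute-force sketch of that quantity (the toy channel-times-precoder matrix and QPSK alphabet are illustrative assumptions, not the paper's setup):

```python
import itertools
import math

def min_euclidean_distance(HF, alphabet, n_streams=2):
    """Minimum receive-side Euclidean distance of a precoded MIMO scheme:
    min over distinct transmit vectors s, s' of ||H F (s - s')||, where
    HF is the combined channel*precoder matrix (rows = receive antennas)."""
    vectors = list(itertools.product(alphabet, repeat=n_streams))
    best = float("inf")
    for s1, s2 in itertools.combinations(vectors, 2):
        diff = [a - b for a, b in zip(s1, s2)]
        y = [sum(h * d for h, d in zip(row, diff)) for row in HF]
        best = min(best, math.sqrt(sum(abs(v) ** 2 for v in y)))
    return best

qpsk = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]
HF = [[1.0, 0.2], [0.1, 0.5]]      # toy channel-times-precoder matrix
d = min_euclidean_distance(HF, qpsk)
```

A Max-dmin precoder searches over feasible F for the matrix maximizing this value; the exhaustive pair scan above is only practical for small constellations and stream counts.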

Author 1: MAHI Sarra
Author 2: BOUACHA Abdelhafid

Keywords: MIMO; max-dmin; POSM; singular values decomposition (SVD); soft-ML detector

Download PDF

Paper 51: Comparative Analysis of Evolutionary Algorithms for Multi-Objective Travelling Salesman Problem

Abstract: Evolutionary computation has grown considerably in the last few years. Inspired by biological evolution, this field is used to solve NP-hard optimization problems and find the best solutions. The Travelling Salesman Problem (TSP) is the most popular and complex problem used to evaluate different algorithms. In this paper, we conduct a comparative analysis of NSGA-II, NSGA-III, SPEA-2, MOEA/D and VEGA to find out which algorithm is best suited to multi-objective TSP (MOTSP) instances. The results reveal that MOEA/D performed better than the other four algorithms in terms of higher hypervolume and lower values of generational distance (GD), inverse generational distance (IGD) and adaptive epsilon; on the other hand, MOEA/D took more time than the rest of the algorithms.
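Hypervolume, the headline indicator in this comparison, is the area (volume in higher dimensions) of objective space dominated by a front relative to a reference point; larger is better. A minimal bi-objective sketch — the toy front and reference point are illustrative, not results from the paper:

```python
def hypervolume_2d(front, reference):
    """Hypervolume of a 2-D minimisation front w.r.t. a reference point:
    the area dominated by the front, a standard quality indicator for
    multi-objective optimisers. Larger is better."""
    rx, ry = reference
    pts = sorted(front)                     # ascending in objective 1
    hv, prev_y = 0.0, ry
    for x, y in pts:
        if y < prev_y:                      # skip dominated points
            hv += (rx - x) * (prev_y - y)   # add this point's new slice
            prev_y = y
    return hv

front = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]   # toy non-dominated front
hv = hypervolume_2d(front, reference=(5.0, 5.0))
# the three slices contribute 4 + 6 + 1 = 11 units of dominated area
```

GD and IGD complement this with average distances between the obtained front and a reference front, so the study's indicators together cover both convergence and spread.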

Author 1: Nosheen Qamar
Author 2: Nadeem Akhtar
Author 3: Irfan Younas

Keywords: Evolutionary computation; algorithms; NSGA-II; NSGA-III; MOEA-D; comparative analysis

Download PDF

Paper 52: Teen’s Social Media Adoption: An Empirical Investigation in Indonesia

Abstract: Social media has reached great popularity in the past decade. Indonesia has more than 63 million social media users who access their accounts through mobile phones, making it the third-largest user base in the world after the United States and India. This study attempts to determine the factors affecting users' behavioural intention regarding social media usage. The TAM (Technology Acceptance Model) for social media by Rauniar et al. is adopted to provide empirical evidence about teens in Indonesia. Data were collected through a questionnaire survey, and the hypotheses are analyzed with SEM (Structural Equation Modeling). The results show that the factor affecting Indonesian teens in using social media is perceived usefulness, while trustworthiness (TW) has no significant influence on their intention to use social media.

Author 1: Ari Kusyanti
Author 2: Harin Puspa Ayu Catherina
Author 3: Dita Rahma Puspitasari
Author 4: Yustiyana April Lia Sari

Keywords: Social media; Technology Acceptance Model (TAM); user behavior; perceived usefulness; trust; intention; actual use; Structural Equation Modeling (SEM)

Download PDF

Paper 53: Securely Eradicating Cellular Dependency for E-Banking Applications

Abstract: Numerous applications are available on the Internet for the exchange of personal information and money. All these applications need to authenticate users to confirm their legitimacy. Currently, the most commonly employed credentials are static passwords, but people tend to behave carelessly in choosing their passwords to avoid the burden of memorizing complex ones. Such frail password habits are a severe threat to the various services available online, especially electronic banking (e-banking). To eliminate the need for creating and managing passwords, a variety of solutions are prevalent, the traditional one being the One-Time Password (OTP), a single session/transaction password. However, the majority of OTP-based security solutions fail to satisfy usability or scalability requirements and are quite vulnerable owing to their reliance on multiple communication channels. In this paper, a reliable and adoptable solution that provides better security for online banking transactions is proposed. This is an initiative to eliminate the dependency on the Global System for Mobile communication (GSM), the most popular means of sending One-Time Passwords to users availing themselves of e-banking facilities.
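One standard way to generate OTPs without any delivery channel at all is a shared-secret counter scheme such as HOTP (RFC 4226), where client and server derive the same code independently. This is a standard algorithm shown for illustration, not necessarily the scheme the paper proposes:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based One-Time Password (RFC 4226): both sides compute
    HMAC-SHA1(secret, counter) and truncate it to a short decimal code,
    so no SMS/GSM channel is needed to deliver the OTP."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 test vector: ASCII key "12345678901234567890", counter 0
print(hotp(b"12345678901234567890", 0))   # prints "755224"
```

The time-based variant (TOTP, RFC 6238) simply replaces the counter with a timestamp window, which is what most authenticator apps use.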

Author 1: Bisma Rasool Pampori
Author 2: Tehseen Mehraj
Author 3: Burhan Ul Islam Khan
Author 4: Asifa Mehraj Baba
Author 5: Zahoor Ahmad Najar

Keywords: E-banking; one time password (OTP); global system for mobile communication (GSM); authentication

Download PDF

Paper 54: A Review and Classification of Widely used Offline Brain Datasets

Abstract: Brain-Computer Interfaces (BCI) are a natural extension of Human-Computer Interaction (HCI) technologies. BCI is especially useful for people suffering from diseases such as Amyotrophic Lateral Sclerosis (ALS), which cause motor disabilities. To evaluate the effectiveness of BCI across different paradigms, the need for benchmark BCI datasets is increasing rapidly. Although such datasets exist, to the best of our knowledge a comparative study of them is not available. In this paper, we provide a comprehensive overview of various BCI datasets, briefly describe their characteristics and devise a classification scheme for them. The comparative study lists the feature extractors and classifiers used for each dataset, and potential use-cases for each dataset are also provided.

Author 1: Muhammad Wasim
Author 2: Muhammad Sajjad
Author 3: Farheen Ramzan
Author 4: Usman Ghani Khan
Author 5: Waqar Mahmood

Keywords: BCI; dataset; brain-computer interface; amyotrophic lateral sclerosis; classification

Download PDF

Paper 55: A Novel Design of Pilot Aided Channel Estimation for MIMO-CDMA System

Abstract: In order to estimate the characteristics of a fading channel, a pilot signal is transmitted alongside the traffic channel. Estimating the fading channel parameters is of paramount importance, as they may be used to design different equalization techniques, to assign rake receiver weights to the strongest multipaths, and for coherent reception and weighted combining of the multipath components in wireless communication systems. In this paper, a pilot-aided channel estimation technique for MIMO-CDMA systems is presented. The technique applies minimum mean squared error (MMSE) estimation to information corrupted by a flat fading channel and noise. Simulation results strongly validate the theoretical predictions for different values of SNR and numbers of users.
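For a single flat-fading tap observed through known pilots, y_i = h·p_i + n_i, the linear MMSE estimate has the closed form ĥ = σ_h²·pᴴy / (σ_h²·‖p‖² + σ_n²). A scalar sketch of that estimator under these standard model assumptions (the BPSK pilots and noise figures are illustrative, not the paper's MIMO-CDMA simulation):

```python
import random

def mmse_channel_estimate(pilots, received, h_var, noise_var):
    """Scalar LMMSE estimate of a flat-fading channel tap h from known
    pilots: h_hat = var_h * p^H y / (var_h * ||p||^2 + var_n)."""
    num = h_var * sum(p.conjugate() * y for p, y in zip(pilots, received))
    den = h_var * sum(abs(p) ** 2 for p in pilots) + noise_var
    return num / den

# Simulate: a random Rayleigh tap observed through 64 BPSK pilots plus noise.
random.seed(1)
h = complex(random.gauss(0, 0.707), random.gauss(0, 0.707))
pilots = [random.choice([-1.0, 1.0]) for _ in range(64)]
noise_var = 0.1
rx = [h * p + complex(random.gauss(0, (noise_var / 2) ** 0.5),
                      random.gauss(0, (noise_var / 2) ** 0.5))
      for p in pilots]
h_hat = mmse_channel_estimate(pilots, rx, h_var=1.0, noise_var=noise_var)
# with 64 pilots, h_hat lands close to the true tap h
```

In the MIMO-CDMA setting the same formula generalizes to matrix form, one estimate per transmit-receive antenna pair after despreading.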

Author 1: Khalid Mahmood

Keywords: Channel estimation; MIMO-CDMA; channel estimator; MMSE; Rayleigh fading channel; SNR

Download PDF

The Science and Information (SAI) Organization
© The Science and Information (SAI) Organization Limited. Registered in England and Wales. Company Number 8933205. All rights reserved. thesai.org