The Science and Information (SAI) Organization
IJACSA Volume 7 Issue 4

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially as long as the original work is properly cited.

View Full Issue

Paper 1: Novel Altered Region for Biomarker Discovery in Hepatocellular Carcinoma (HCC) Using Whole Genome SNP Array

Abstract: Cancer is one of the leading medical causes of mortality. Most hepatocellular carcinoma arises from the accumulation of genetic abnormalities, possibly induced by external etiological factors, especially HCV and HBV infections. New tools are needed to analyze the large volumes of data involved and to expose the genetic changes that may be critical both for understanding how cancers develop and for determining how they might ultimately be treated. Gene expression profiling may yield new biomarkers that improve diagnostic accuracy for detecting hepatocellular carcinoma. In this work, a statistical technique (the discrete stationary wavelet transform) is proposed for detecting copy number alterations by analyzing a high-density single-nucleotide polymorphism array of 30 cell lines on specific chromosomes frequently altered in hepatocellular carcinoma. The results demonstrate the feasibility of whole-genome fine mapping of copy number alterations via high-density single-nucleotide polymorphism genotyping. A novel altered chromosomal region was discovered: amplification of region 4q22.1 was detected in 22 of the 30 hepatocellular carcinoma cell lines (73%). This region harbors AFF1 and DSPP, tumor suppressor genes. This finding has not previously been reported to be involved in liver carcinogenesis; it can be used to discover a new HCC biomarker and supports a better understanding of hepatocellular carcinoma.

Author 1: Esraa M. Hashem
Author 2: Mai S. Mabrouk
Author 3: Ayman M. Eldeib

Keywords: Hepatocellular carcinoma; copy number alteration; biomarkers; single-nucleotide polymorphism

PDF
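The detection idea in the abstract — smoothing a per-probe copy-number signal and flagging runs that exceed a threshold — can be illustrated with a minimal sketch. The paper uses a discrete stationary wavelet transform; here a simple moving average stands in for the wavelet approximation band, and the simulated signal, window, and threshold are illustrative assumptions, not the authors' values.

```python
def detect_amplified(log_ratios, window=4, threshold=0.3):
    """Return (start, end) index pairs whose smoothed log-ratio exceeds threshold."""
    n = len(log_ratios)
    regions, start = [], None
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        smoothed = sum(log_ratios[lo:hi]) / (hi - lo)  # moving-average stand-in
        if smoothed > threshold and start is None:
            start = i
        elif smoothed <= threshold and start is not None:
            regions.append((start, i - 1))
            start = None
    if start is not None:
        regions.append((start, n - 1))
    return regions

signal = [0.0] * 20 + [0.8] * 10 + [0.0] * 20   # simulated gain in the middle
print(detect_amplified(signal))                 # one region covering the gain
```

A real pipeline would replace the moving average with the SWT approximation coefficients and calibrate the threshold per chromosome.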

Paper 2: A Framework for Satellite Image Enhancement Using Quantum Genetic and Weighted IHS+Wavelet Fusion Method

Abstract: This paper examines the applicability of quantum genetic algorithms (QGA) to the optimization problems posed by satellite image enhancement techniques, particularly super-resolution and fusion. We introduce a framework that starts by reconstructing a higher-resolution panchromatic image from the subpixel shifts between a set of lower-resolution images (registration), followed by interpolation and restoration, and ends by using the higher-resolution image to pan-sharpen a multispectral image with a weighted IHS+Wavelet fusion technique. Successful super-resolution requires accurate image registration through optimal estimation of the subpixel shifts, and blind restoration and interpolation with optimal parameters are needed to obtain the highest-quality higher-resolution image. In image fusion there is a trade-off between spatial and spectral enhancement, and existing methods find it difficult to excel at both. The objective here is to satisfy all of these requirements with optimal fusion weights, using parameter constraints to direct the optimization process. The QGA estimates the optimal parameters needed by each mathematical model in the framework (super-resolution and fusion). Simulation results show that the QGA-based method can automatically estimate the parameters that demand the greatest accuracy, and achieves higher quality and a more efficient convergence rate than the corresponding conventional GA-based and classic computational methods.

Author 1: Amal A. HAMED
Author 2: Osama A. OMER
Author 3: Usama S. MOHAMED

Keywords: Quantum genetic algorithm (QGA); IHS; fusion; wavelet; registration; super-resolution

PDF

Paper 3: Improved Appliance Coordination Scheme with Waiting Time in Smart Grids

Abstract: Smart grids aim to merge advances in communications and information technologies with traditional power grids. In smart grids, users can generate energy and sell it to the local utility supplier, and they can reduce energy consumption by shifting appliances' start times to off-peak hours. Many researchers have proposed techniques for this problem for home appliances, such as the Appliances Coordination (ACORD) scheme and the Appliances Coordination with Feed-In (ACORD-FI) scheme. The goal of this work is to introduce an efficient scheme that reduces the total energy bill by building on the ACORD-FI scheme. Three scheduling schemes are proposed: Appliances Coordination by Giving Waiting Time (ACORD-WT), Appliances Coordination by Giving Priority (ACORD-P), and a scheme combining photovoltaic (PV) generation with priority and waiting-time scheduling. A simulator written in C++ is used to test the performance of the proposed schemes. The performance metric is the total savings in the energy bill in dollars. The proposed schemes are compared with ACORD-FI, and the results show that ACORD-WT is more efficient than ACORD-FI regardless of the number of appliances; the proposed ACORD-P also outperforms the standard ACORD-FI.

Author 1: Firas A. Al Balas
Author 2: Wail Mardini
Author 3: Yaser Khamayseh
Author 4: Dua’a Ah.K.Bani-Salameh

Keywords: smart grids; energy bill; off-peak

PDF
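The waiting-time idea behind ACORD-WT can be sketched as follows: each appliance may be delayed by up to its waiting time and is started at the cheapest hour within that window. The price profile, appliances, and scheduling rule below are simplified assumptions for illustration, not the paper's C++ simulator.

```python
def schedule(appliances, hourly_price):
    """appliances: list of (name, requested_hour, max_wait_hours, kwh)."""
    plan, total = {}, 0.0
    for name, hour, max_wait, kwh in appliances:
        # candidate start hours: requested hour plus up to max_wait hours of delay
        window = range(hour, min(hour + max_wait, len(hourly_price) - 1) + 1)
        best = min(window, key=lambda h: hourly_price[h])
        plan[name] = best
        total += kwh * hourly_price[best]
    return plan, total

prices = [0.08] * 7 + [0.20] * 11 + [0.08] * 6   # cheap nights, expensive 07:00-17:00
apps = [("washer", 17, 4, 1.2), ("dryer", 16, 1, 2.0)]
plan, cost = schedule(apps, prices)
print(plan, round(cost, 3))
```

The washer's four-hour wait lets it slide into the off-peak evening rate, while the dryer's one-hour wait cannot escape the peak window.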

Paper 4: Network Attack Classification and Recognition Using HMM and Improved Evidence Theory

Abstract: In this paper, a fusion classification decision model based on HMM-DS is proposed, and training and recognition methods for the model are given. A pure HMM classifier cannot achieve an ideal balance between giving each model a strong ability to identify its own target and maximizing the differences between models. We therefore integrate the HMM results into the Dempster-Shafer (DS) framework: the HMMs provide state probabilities to DS, and the output of each hidden Markov model is used as a body of evidence. An improved evidence theory method is proposed to fuse the results and counter the drawbacks of a pure HMM, improving the classification accuracy of the system. We compare our approach with the traditional evidence theory method, other representative improved DS methods, the pure HMM method, and common classification methods. The experimental results show that our proposed method has a significant practical effect, improving the training process of network attack classification with high accuracy.

Author 1: Gang Luo
Author 2: Ya Wen
Author 3: Lingyun Xiang

Keywords: Hidden Markov Model; Evidence theory; Network attack; KDD CUP99; Classification

PDF
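The fusion step the abstract describes — treating each HMM's output as a body of evidence — rests on Dempster's rule of combination. The sketch below shows the classic rule on made-up attack-class masses; the paper's improved variant differs from this baseline.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset hypotheses to masses)."""
    combined, conflict = {}, 0.0
    for h1, v1 in m1.items():
        for h2, v2 in m2.items():
            inter = h1 & h2
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2        # mass assigned to contradictory pairs
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    norm = 1.0 - conflict
    return {h: v / norm for h, v in combined.items()}

dos, probe, normal = frozenset({"dos"}), frozenset({"probe"}), frozenset({"normal"})
m_a = {dos: 0.7, probe: 0.2, normal: 0.1}   # evidence from one HMM (toy values)
m_b = {dos: 0.6, probe: 0.3, normal: 0.1}   # evidence from a second HMM
fused = dempster_combine(m_a, m_b)
print(max(fused, key=fused.get))            # class with highest fused belief
```

Concordant evidence reinforces itself: two HMMs each mildly favoring "dos" yield a fused mass that favors it far more strongly.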

Paper 5: Cloud CRM: State-of-the-Art and Security Challenges

Abstract: Security undoubtedly plays the main role in cloud CRM deployment, since agile firms use cloud services on providers' infrastructures to perform critical CRM operations. This paper focuses on cloud CRM themes, with security threats as the greatest concern. Several security aspects of cloud CRM deployment are discussed: access to and control of customer databases; secure data transfer over the cloud; trust between the enterprise and the cloud service provider; the confidentiality, integrity, and availability triad; and security hazards. Future studies and practice are presented at the end.

Author 1: Amin Shaqrah

Keywords: Cloud computing; CRM; Security; Cloud Security

PDF

Paper 6: Improve Traffic Management in the Vehicular Ad Hoc Networks by Combining Ant Colony Algorithm and Fuzzy System

Abstract: Over recent years, the number of vehicles has increased. Heavy traffic leads to serious problems, and finding a sensible solution to the traffic problem is a significant challenge; using the full capacity of existing streets can help solve it and reduce costs. Instead of static algorithms, we present a new method that combines the ant colony optimization (ACO) algorithm with fuzzy logic to improve traffic management in vehicular ad hoc networks, which we call Improved Traffic Management in Vehicular ad hoc networks (ITMV). The proposed method partitions the map into segments, each assigned to a server, uses fuzzy logic to compute the instantaneous traffic state of the roads, and distributes traffic to reduce congestion as much as possible, prioritizing lower travel time over the shorter route. The method collects vehicle and street information to compute the instantaneous vehicle density. In simulations, the proposed method was compared with existing methods and improved speed, travel time, and air pollution by an average of 36.5%, 38%, and 29%, respectively.

Author 1: Fazlollah Khodadadi
Author 2: Seyed Javad Mirabedini
Author 3: Ali Harounabadi

Keywords: Traffic Management; Vehicular Ad-hoc Networks; Ant Colony Algorithms; Fuzzy System

PDF

Paper 7: An Approach of Self-Organizing Systems Based on Factor-Order Space

Abstract: To explore new system self-organizing theory, it is urgent to find a new method in system science. This paper combines factor space theory with system non-optimum theory, applies the combination to the study of system self-organizing theory, and proposes new concepts: system factor space, object-factor, and space-order relation. It constructs a factor-space framework for system self-organization based on factor mapping and object inversion, studies system ordering from a new perspective with optimum and non-optimum attributes as the basis of system uncertainty, and extends factor space theory from f(0,o) to f(o,0). The research suggests that constructing a system factor space builds an information system capable of self-learning for system self-organization, and that functions of system self-organization are better enhanced by fusing information from data analysis and perception judgment.

Author 1: Jin Li
Author 2: Ping He

Keywords: Self-organization; factor-order space; extended order; extended entropy; systems level

PDF

Paper 8: Incorporating Multiple Attributes in Social Networks to Enhance the Collaborative Filtering Recommendation Algorithm

Abstract: Because existing recommendation algorithms rely on a single user-similarity measure and recommender accuracy is unsatisfactory, we propose a novel social multi-attribute collaborative filtering algorithm (SoMu). We first define user attraction similarity from users' historical rating behavior using graph theory, and then define user interaction similarity from users' social friendships, based on the follower/followee relationship. We then combine the two to obtain a multi-attribute comprehensive user similarity model, and finally generate personalized recommendations from the comprehensive model. Experimental results on Douban and MovieLens show that the proposed algorithm successfully incorporates multiple social-network attributes into the recommendation algorithm and improves recommender accuracy with the improved comprehensive similarity model.

Author 1: Jian Yi
Author 2: Xiao Yunpeng
Author 3: Liu Yanbing

Keywords: Recommender System; Social Networks; Collaborative Filtering; Comprehensive Similarity

PDF
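The final combination step can be sketched as a weighted blend of the two similarities. Here `alpha` is a hypothetical mixing weight and the input similarities are toy values, not the graph-based attraction and social-link interaction measures the paper derives.

```python
def comprehensive_similarity(attraction, interaction, alpha=0.6):
    """Blend attraction and interaction similarity maps (keys are user pairs)."""
    pairs = set(attraction) | set(interaction)
    return {p: alpha * attraction.get(p, 0.0) + (1 - alpha) * interaction.get(p, 0.0)
            for p in pairs}

attr = {("u1", "u2"): 0.8, ("u1", "u3"): 0.2}   # from historical rating behavior
inter = {("u1", "u2"): 0.5, ("u1", "u4"): 0.9}  # from social friendships
print(comprehensive_similarity(attr, inter))
```

Pairs present in only one map fall back to zero for the other term, so the blend degrades gracefully when one information source is missing.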

Paper 9: Using Rule Base System in Mobile Platform to Build Alert System for Evacuation and Guidance

Abstract: The last few years have witnessed the widespread use of mobile technology. Billions of people around the world own smartphones, which they use for both personal and business applications, and these technologies can minimize the risk of losing lives. The mobile platform is one of the most popular platform technologies, used on a wide scale and accessible to a large number of people. There has been a huge increase in natural and man-made disasters in recent years; such disasters can happen anytime and anywhere, causing major damage to people and property. Environmental conditions and people's failure to move to safer places led to the recent catastrophic events in Jeddah city, where floods caused the sinking and destruction of homes and private property. This paper therefore describes a system that can help determine the affected properties, evacuate them, and provide proper guidance to registered users. The system notifies mobile phone users with guidance messages and sound alerts in real time when disasters (fires, floods) hit. Warnings and tips are delivered to the user's mobile device to teach him or her how to react before, during, and after the disaster. The mobile application uses GPS to determine the user's location and guide the user along the best route, aided by a rule-based system built through interviews with domain experts. Moreover, the user receives Google Maps updates for any added information. The system consists of two subsystems: the first helps students at our university evacuate during a catastrophe, and the second aids all people in the city. With these features, the system can provide the required information at the needed time.

Author 1: Maysoon Fouad Abulkhair
Author 2: Lamiaa Fattouh Ibrahim

Keywords: safety; natural disasters; smartphone; rule-based system; mobile network

PDF

Paper 10: Experimental Use of Kit-Build Concept Map System to Support Reading Comprehension of EFL in Comparing with Selective Underlining Strategy

Abstract: In this paper, we describe the effects of using the Kit-Build concept mapping (KB-mapping) method as technology-enhanced support for Reading Comprehension (RC) in English as a Foreign Language (EFL) contexts. RC is a process that helps learners become more effective and efficient readers; it is an intentional, active, and interactive activity that language learners experience in their daily activities, and RC in EFL is a significant research area in technology-enhanced learning. To clarify the effect of the KB-mapping method, we compared its results with those of the selective underlining (SU) strategy using a Comprehension Test (CT) and a Delayed Comprehension Test (DCT) administered two weeks later. The results show a noticeable difference in the DCT scores, while there is no significant difference in the CT scores, indicating that the KB-mapping method helps learners retain information for a longer period of time. Further statistical analysis of the Kit-Build condition (KB-condition) group, comparing test results with map scores, showed that learners could answer 76% of the questions whose answers were included in their maps, and could recall 86% of such questions. This indicates that the KB-mapping method helps learners retain and recall more information than the SU strategy, even after two weeks. A follow-up questionnaire after the experiments revealed that participants considered KB-mapping similar to SU for the CT taken immediately after use, but more useful for remembering information after a while, though more difficult to carry out. Participants liked using it in RC tasks but asked for more time to do so.

Author 1: Mohammad ALKHATEEB
Author 2: Yusuke HAYASHI
Author 3: Taha RAJAB
Author 4: Tsukasa HIRASHIMA

Keywords: Technology-Enhanced Learning; Kit-Build Concept Map; Reading Comprehension; EFL

PDF

Paper 11: Regression Test-Selection Technique Using Component Model Based Modification: Code to Test Traceability

Abstract: Regression testing is a safeguarding procedure to validate and verify adapted software and guarantee that no errors have emerged. However, regression testing is very costly when testers must re-execute all test cases against the modified software. This paper proposes a new approach to regression test selection. The approach is based on meta-models (test models and structured models) that decrease the number of test cases used in the regression testing process. The approach has been evaluated on three Java applications. To measure its effectiveness, we compare the results with those of the retest-all approach. The results show that our approach reduces the size of the test suite without a negative impact on fault-detection effectiveness.

Author 1: Ahmad A. Saifan
Author 2: Mohammed Akour
Author 3: Iyad Alazzam
Author 4: Feras Hanandeh

Keywords: Regression Test; Regression Test selection technique; Meta-Model; Models Traceability

PDF

Paper 12: A Novel Method to Design S-Boxes Based on Key-Dependent Permutation Schemes and its Quality Analysis

Abstract: S-boxes are used in block ciphers as important nonlinear components; their nonlinearity provides important protection against linear and differential cryptanalysis. The S-boxes used in the encryption process can be chosen to be key-dependent. In this paper, we present four simple algorithms for generating key-dependent S-boxes, and we propose eight distance metrics for analyzing their quality. We take the Matlab function “randperm” as the standard of permutation and compare it with the permutation possibilities of the proposed algorithms. The second section describes the four algorithms that generate key-dependent S-boxes. The third section analyzes the eight normalized distance metrics used to evaluate the quality of the key-dependent generation algorithms. Afterwards, we experimentally investigate the quality of the generated key-dependent S-boxes. The comparison results show that the key-dependent S-boxes have good quality and may be applied in cipher systems.

Author 1: Kazys Kazlauskas
Author 2: Robertas Smaliukas
Author 3: Gytis Vaicekauskas

Keywords: data encryption; substitution boxes; generation algorithms; distance metrics; quality analysis

PDF
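One common way to derive a key-dependent S-box is a Fisher-Yates shuffle driven by key bytes; the sketch below is a generic stand-in for this idea, not one of the paper's four algorithms.

```python
def keyed_sbox(key: bytes, size: int = 256) -> list:
    """Permute 0..size-1 using successive key bytes as the randomness source."""
    sbox = list(range(size))
    k = 0
    for i in range(size - 1, 0, -1):
        # mix key bytes into a pseudo-random swap index in [0, i]
        k = (k * 31 + key[i % len(key)]) % (i + 1)
        sbox[i], sbox[k] = sbox[k], sbox[i]
    return sbox

s1 = keyed_sbox(b"secret-key-1")
s2 = keyed_sbox(b"secret-key-2")
assert sorted(s1) == list(range(256))       # output is still a permutation
print(s1[:8], s1 == s2)                     # different keys -> different boxes
```

The paper's distance metrics would then quantify how far such key-dependent boxes are from a reference permutation such as Matlab's `randperm`.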

Paper 13: Cultural Dimensions of Behaviors Towards E-Commerce in a Developing Country Context

Abstract: Customers prefer to shop online for various reasons, such as saving time, better prices, convenience, selection, and the availability of products and services. The accessibility and ubiquity of the Internet allow business beyond brick and mortar. Web-based businesses need to understand consumers' expectations, attitudes, and behavior across the globe and take cultural effects into consideration. Saudi Arabia has become a highly lucrative potential market for web-based companies; however, the growing number of Saudi Internet users has not translated into leading online shoppers. It is important for web-based companies to identify the barriers that keep Saudi users away from the online shopping mainstream, which requires understanding Saudi culture, expectations, behavior, and decision-making processes in order to promote e-commerce. The purpose of this study is to investigate the effects of Saudi Arabian culture on the diffusion of e-commerce. The study addresses cultural differences, risk perceptions, and attitudes by surveying Saudi people about shopping online. An empirical study was conducted to collect data from Saudi users.

Author 1: Fahim Akhter

Keywords: Electronic Commerce; Security; Culture; Online Shopping; Privacy; Saudi Arabia

PDF

Paper 14: A New Network on Chip Design Dedicated to Multicast Service

Abstract: The qualities of service offered by a network on chip are considered network performance criteria. However, implementing a quality of service such as multicasting raises difficulties, especially at the algorithmic level. Numerous studies have tried to implement networks that support the multicast service, adopting various algorithms to keep the network's average latency acceptable. To evaluate these algorithms, their performance is compared with algorithms based on multi-unicast, and as expected there is always a performance improvement. Regrettably, such a service can also degrade latency because it occupies a large share of the network bandwidth for some period (which depends on the packet size). In this paper, we propose an architectural solution aiming to avoid this possible degradation.

Author 1: Mohamed Fehmi Chatmen
Author 2: Adel Baganne
Author 3: Rached Tourki

Keywords: Network-on-Chip (NoC); adaptive routing; Quality of service; Multicast

PDF

Paper 15: New Artificial Immune System Approach Based on Monoclonal Principle for Job Recommendation

Abstract: Finding the best solution to an optimization problem is a tedious task, especially in the presence of enormous feature sets. When we handle a problem such as job recommendation, with its diversity of features, we should rely on metaheuristics — for example, the Artificial Immune System, a computational intelligence paradigm that achieves diversification and exploration of the search space as well as exploitation of good solutions in reasonable time. Unfortunately, in problems of such a diverse nature as job recommendation, it produces a huge number of antibodies, causing a large number of matching processes that hurt system efficiency. To address this issue, we present a new intelligence algorithm inspired by immunology and based on the monoclonal antibody production principle, which, to our knowledge, has never been applied to science and engineering problems. The proposed algorithm recommends a ranked list of the best applicants for a given job. We discuss the design issues as well as the immune system processes that should be applied to the problem. Finally, experiments are conducted that show the excellence of our approach.

Author 1: Shaha Al-Otaibi
Author 2: Mourad Ykhlef

Keywords: content-based filtering; computational intelligence; artificial immune system; clonal selection; monoclonal antibodies

PDF

Paper 16: A New Method for Text Hiding in the Image by Using LSB

Abstract: An important topic in the exchange of confidential messages over the Internet is the security of information conveyance. For instance, the producers and consumers of digital products are keen to know that their products are authentic and can be differentiated from those that are invalid. The science of data hiding is the art of embedding data in audio files, images, videos, or text in a way that meets these security needs. Steganography is a branch of data-hiding science that aims to reach a desirable level of security in the exchange of private military and commercial data that must not be revealed. These approaches can be used as methods complementary to encryption in the exchange of private data.

Author 1: Reza tavoli
Author 2: Maryam bakhshi
Author 3: Fatemeh salehian

Keywords: Hiding text inside an image; image processing; Steganography; image compression; LSB

PDF
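The LSB technique named in the title can be sketched in a few lines: each bit of the message overwrites the least significant bit of one pixel, changing its value by at most 1. The flat list of grayscale values and the NUL terminator below are illustrative simplifications, not the paper's exact scheme.

```python
def embed(pixels, text):
    """Hide text (NUL-terminated) in the LSBs of a flat list of 0-255 pixel values."""
    bits = [(byte >> b) & 1
            for byte in text.encode() + b"\x00"
            for b in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("message too long for cover image")
    out = pixels[:]
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit       # overwrite least significant bit only
    return out

def extract(pixels):
    """Read LSBs 8 at a time until the NUL terminator."""
    data = bytearray()
    for i in range(0, len(pixels) - 7, 8):
        byte = 0
        for p in pixels[i:i + 8]:
            byte = (byte << 1) | (p & 1)
        if byte == 0:
            break
        data.append(byte)
    return data.decode()

cover = list(range(256)) * 4               # dummy 1024-pixel "image"
stego = embed(cover, "hello")
print(extract(stego))                      # -> hello
```

Because only the lowest bit of each pixel changes, the stego image is visually indistinguishable from the cover image.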

Paper 17: A New CAD System for Breast Microcalcifications Diagnosis

Abstract: Breast cancer is one of the deadliest cancers in the world, especially among women. With no identified causes and no effective treatment, early detection remains necessary to limit the damage and provide a possible cure. Periodic mammography screening of women with a family history can provide early diagnosis of breast tumors. Computer-Aided Diagnosis (CAD) is a powerful tool that can help radiologists improve their diagnostic accuracy at earlier stages. Several works have been developed to analyze digital mammograms, detect possible lesions (especially masses and microcalcifications), and evaluate their malignancy. In this paper, a new approach to diagnosing breast microcalcifications on digital mammograms is introduced. The proposed approach begins with a preprocessing procedure that removes artifacts and the pectoral muscle using morphological operators and enhances contrast using galactophorous tree interpolation. The second step of the proposed CAD system segments microcalcification clusters using Generalized Gaussian Density (GGD) estimation and a Bayesian back-propagation neural network. The last step characterizes the microcalcifications using morphological features, which feed a neuro-fuzzy system that classifies the detected breast microcalcifications into benign and malignant classes.

Author 1: H. Boulehmi
Author 2: H. Mahersia
Author 3: K. Hamrouni

Keywords: Artifacts and pectoral muscle removal; Bayesian back-propagation neural network; Breast microcalcifications; CAD system; Digital mammograms; Galactophorous tree interpolation; GGD estimation; Morphologic features; Neuro-fuzzy system

PDF

Paper 18: Secure High Dynamic Range Images

Abstract: In this paper, a tone mapping algorithm is proposed to produce LDR (Limited Dynamic Range) images from HDR (High Dynamic Range) images. In this approach, non-linear functions are applied to compress the dynamic range of HDR images. Security tools are then applied to the resulting LDR images, and their effectiveness is tested on the reconstructed HDR images. Three specific security tools are described in more detail: integrity verification using a hash function to compute local digital signatures, encryption for confidentiality, and a scrambling technique.

Author 1: Med Amine Touil
Author 2: Noureddine Ellouze

Keywords: high dynamic range; tone mapping; range compression; integrity verification; encryption; scrambling; inverse tone mapping; range expansion

PDF
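A global tone-mapping curve in the spirit of the non-linear range compression described above can be sketched with the classic L/(1+L) operator; this is a generic stand-in, not the authors' exact functions.

```python
def tone_map(hdr):
    """Compress HDR luminance values into [0, 1) with the L/(1+L) curve."""
    return [l / (1.0 + l) for l in hdr]

hdr = [0.01, 1.0, 100.0, 10000.0]          # six orders of magnitude of luminance
ldr = tone_map(hdr)
print([round(v, 4) for v in ldr])
```

The curve is monotonic, so relative brightness ordering survives compression, which is what allows an inverse curve to approximately re-expand the range afterwards.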

Paper 19: A Subset Feature Elimination Mechanism for Intrusion Detection System

Abstract: Several studies have suggested that by selecting relevant features for an intrusion detection system, it is possible to considerably improve the detection accuracy and performance of the detection engine. Nowadays, with the emergence of technologies such as cloud computing and big data, large amounts of network traffic are generated, and the intrusion detection system must dynamically collect and analyze the data produced by the incoming traffic. However, in a large dataset not all features contribute to representing the traffic, so selecting a reduced set of adequate features may improve both the speed and the accuracy of the intrusion detection system. In this study, a feature selection mechanism is proposed that aims to eliminate non-relevant features and identify those that will improve the detection rate, based on the score each feature earns during the selection process. To achieve this objective, a recursive feature elimination process was employed together with a decision-tree-based classifier, and the relevant features were identified. The approach was applied to the NSL-KDD dataset, an improved version of the earlier KDD 1999 dataset, using scikit-learn, a machine learning library written in Python. With this approach, the relevant features in the dataset were identified and the accuracy rate was improved. These results support the idea that feature selection significantly improves classifier performance. Understanding the factors that help identify relevant features will allow the design of better intrusion detection systems.

Author 1: Herve Nkiama
Author 2: Syed Zainudeen Mohd Said
Author 3: Muhammad Saidu

Keywords: classification; decision tree; features selection; intrusion detection system; NSL-KDD; scikit-learn

PDF
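The selection step described above — recursive feature elimination wrapped around a decision-tree classifier — maps directly onto scikit-learn's `RFE`. The toy dataset below is an assumption for illustration (feature 0 carries the class signal, the others are pseudo-noise); the paper applies this to NSL-KDD.

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.feature_selection import RFE

# feature 0 separates the classes perfectly; features 1-2 are pseudo-noise
X = [[i / 10, (i * 7) % 10 / 10, (i * 3) % 10 / 10] for i in range(20)]
y = [1 if i >= 10 else 0 for i in range(20)]

rfe = RFE(estimator=DecisionTreeClassifier(random_state=0),
          n_features_to_select=1)
rfe.fit(X, y)
print(rfe.support_)                        # boolean mask of the kept features
print(rfe.ranking_)                        # 1 = selected; higher = dropped earlier
```

At each round, RFE refits the tree and drops the feature with the lowest importance, which is how the per-feature scores mentioned in the abstract arise.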

Paper 20: Systematic Evaluation of Social Recommendation Systems: Challenges and Future

Abstract: The problem of information overload can be effectively managed with the help of an intelligent system that proactively guides users to relevant or useful information in a tailored way by pruning the large space of possible options. The key challenge lies in what information can be collected and assimilated to make effective recommendations. This paper discusses the reasons for the evolution of recommender systems, leading to the transition from traditional to social-information-based recommendations. A Social Recommender System (SRS) exploits social contextual information in the form of users' social links, social tags, and user-generated data, which contain a wealth of supplemental information about items or services expected to interest the user, or about item features, and it therefore has tremendous potential for improving recommendation quality. A systematic literature review of SRS is carried out by categorizing the various kinds of social contextual information into explicit and implicit user-item information. This paper also analyzes the key aspects of any generic recommender system — namely domain, personalization levels, privacy and trustworthiness, and recommender algorithms — to give researchers new to this field a better understanding.

Author 1: Priyanka Rastogi
Author 2: Dr. Vijendra Singh

Keywords: Social Recommender System; Social Tagging; Social Contextual Information

PDF

Paper 21: Novel Approach to Estimate Missing Data Using Spatio-Temporal Estimation Method

Abstract: With advances in wireless technology and the processing power of mobile devices, every handheld device supports numerous video streaming applications. Video transmission generally uses the user datagram protocol (UDP), which does not provide assured quality of service (QoS); therefore, video post-processing modules are needed for error concealment. In this paper we propose one such algorithm to recover multiple lost blocks of data in video. The proposed algorithm combines the wavelet transform with spatio-temporal data estimation. We decompose the frame containing lost blocks into low- and high-frequency bands using the wavelet transform. The approximation (low-frequency) content of a missing block is then estimated using spatial smoothing, and the details (high frequency) are added using bidirectional (temporal) prediction of the high-frequency wavelet coefficients. Finally, the inverse wavelet transform is applied to the modified wavelet coefficients to recover the frame. The proposed algorithm estimates missing blocks automatically in a spatio-temporal manner. Experiments were carried out with different YUV and compressed-domain streams. The experimental results show enhancement in PSNR as well as visual quality, cross-verified with video quality metrics (VQM).

Author 1: Aniruddha D. Shelotkar
Author 2: Dr. P. V. Ingole

Keywords: Error concealment; Wavelet Transform; Missing Data estimation

PDF

Paper 22: Improvement of Sample Selection: A Cascade-Based Approach for Lesion Automatic Detection

Abstract: Computer-Aided Detection (CADe) systems play a significant role as a preventative effort in the early detection of breast cancer. Developing the pattern recognition of a CADe system involves several phases, including the availability of a large amount of data, feature extraction, selection and use of features, and the choice of an appropriate classification method. The Haar cascade classifier has been successfully developed to detect faces in multimedia images automatically and quickly. The success of face detection systems cannot be separated from the availability of training data in large numbers. However, this is not easy to replicate on medical images for several reasons, including their low quality, the very small gray-value differences, and the limited number of patches available as positive examples. Therefore, this research proposes an algorithm to overcome the limited number of patches in the region of interest when detecting whether or not a lesion exists in mammogram images, based on the Haar cascade classifier. This research uses mammogram and ultrasonography images from the breast imaging of 60 probands and patients at the Clinic of Oncology, Yogyakarta. The CADe system is tested by comparing its reading results with the mammography reading results, validated against the radiologist's reading of the ultrasonography images. The k-fold cross-validation results demonstrate that the algorithm for the multiplication of the intersection rectangle may improve system performance, with accuracy, sensitivity, and specificity of 76%, 89%, and 63%, respectively.

Author 1: Shofwatul ‘Uyun
Author 2: M. Didik R Wahyudi
Author 3: Lina Choridah

Keywords: CADe system; haar; cascade classifier; mammogram

PDF

Paper 23: Security Risk Scoring Incorporating Computers' Environment

Abstract: A framework for a Continuous Monitoring System (CMS) is presented, offering new, improved capabilities. The system uses the actual real-time configuration of the system and its environment, characterized by a Configuration Management Database (CMDB) that includes detailed information on organizational database contents and security and privacy specifications. The Common Vulnerability Scoring System (CVSS) algorithm produces risk scores incorporating information from the CMDB. By using real, up-to-date environmental characteristics, the system achieves more accurate scores than existing practices. The framework presentation includes the system's design and an illustration of the scoring computations.
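For orientation, the CVSS v2 base score that such a system builds on can be computed as follows (metric values and equation per the CVSS v2 specification; how a CMS would further weight the C/I/A terms by per-asset requirements from the CMDB is this paper's contribution and is not reproduced here):

```python
# CVSS v2 metric values, per the specification.
AV = {"L": 0.395, "A": 0.646, "N": 1.0}        # access vector
AC = {"H": 0.35, "M": 0.61, "L": 0.71}         # access complexity
AU = {"M": 0.45, "S": 0.56, "N": 0.704}        # authentication
CIA = {"N": 0.0, "P": 0.275, "C": 0.660}       # confidentiality/integrity/availability impact

def cvss2_base(av, ac, au, c, i, a):
    """CVSS v2 base score, rounded to one decimal."""
    impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
    exploitability = 20 * AV[av] * AC[ac] * AU[au]
    f = 0.0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f, 1)

score = cvss2_base("N", "L", "N", "C", "C", "C")   # network, low complexity, full impact → 10.0
```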

Author 1: Eli Weintraub

Keywords: CVSS; Security; Risk Management; Configuration Management; CMDB; Continuous Monitoring System; Vulnerability

PDF

Paper 24: Scalable Hybrid Speech Codec for Voice over Internet Protocol Applications

Abstract: With the advent of various web-based applications and fourth-generation (4G) access technology, there has been exponential growth in the demand for multimedia service delivery alongside speech signals in a voice over internet protocol (VoIP) setup. There is a need to fine-tune the conventional speech codecs deployed to cater to this modern environment. This fine-tuning can be achieved by further compressing the speech signal and utilizing the available bandwidth to deliver other services. This paper presents a scalable hybrid model of a speech codec using ITU-T G.729 and the db10 wavelet. The codec addresses the problem of speech signal compression in a VoIP setup. Its performance is compared with the standard codec through statistical analysis of the subjective, objective, and quantifiable quality parameters desirable of a codec deployed on VoIP platforms.

Author 1: Manas Ray
Author 2: Mahesh Chandra
Author 3: B.P. Patil

Keywords: VoIP; Speech Compression; Hybrid Speech Codec; ITU-T G.729 codec; db10 wavelet; Statistical Analysis

PDF

Paper 25: Hyperspectral Image Classification Using Unsupervised Algorithms

Abstract: Hyperspectral Imaging (HSI) is a process in which information across the electromagnetic spectrum is collected and processed by a specific sensor device. Its data provide a wealth of information that can be used to address a variety of problems in a number of applications. Hyperspectral image classification assigns all pixels in a digital image to groups. In this paper, unsupervised classification algorithms are used to obtain a classified hyperspectral image: the Iterative Self-Organizing Data Analysis Technique Algorithm (ISODATA) and the K-Means algorithm. Both algorithms are applied to a Washington DC, USA, hyperspectral image using the ENVI tool. Performance is evaluated on the basis of the accuracy assessment of the process after applying Principal Component Analysis (PCA) followed by K-Means or ISODATA. It is found that the ISODATA algorithm is more accurate than K-Means: the overall classification accuracy is 78.3398% using K-Means and 81.7696% using ISODATA. The processing time also increases as the number of iterations required to obtain the classified image grows.
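The PCA-then-cluster pipeline the abstract describes can be sketched in a few lines; this is a minimal NumPy version (plain K-Means on toy data, not ENVI's ISODATA implementation, and the blob data is invented for illustration):

```python
import numpy as np

def pca(X, k):
    # Project samples (pixels x bands) onto the top-k principal components via SVD.
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:k].T

def kmeans(X, k, iters=20, seed=0):
    # Lloyd's algorithm with random initial centers and an empty-cluster guard.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(1)
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels
```

On a real hyperspectral cube of shape (rows, cols, bands), one would first reshape to (rows*cols, bands), run `pca` to a handful of components, cluster, then reshape the labels back to the image grid.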

Author 1: Sahar A. El_Rahman

Keywords: hyperspectral imaging; unsupervised classification; K-Means algorithm; ISODATA algorithm; ENVI

PDF

Paper 26: A Survey on Security for Smartphone Device

Abstract: Technological advancements in mobile connectivity services such as GPRS, GSM, 3G, 4G, Bluetooth, WiMAX, and Wi-Fi have made mobile phones a necessary component of our daily lives. Mobile phones have also become smart, letting users perform routine tasks on the go. However, this rapid increase in technology and the tremendous usage of smartphones make them vulnerable to malware and other security-breaching attacks. This diverse range of mobile connectivity services, device software platforms, and standards makes it critical to look at the holistic picture of current developments in smartphone security research. In this paper, our contribution is twofold. Firstly, we review the threats, vulnerabilities, attacks, and their solutions over the period 2010-2015 with a special focus on smartphones. Attacks are categorized into two types, i.e., old attacks and new attacks. With this categorization, we aim to provide an easy and concise view of different attacks and the possible solutions for improving smartphone security. Secondly, we critically analyze our findings, estimate the market growth of different smartphone operating systems in the coming years, and forecast malware growth through 2020.

Author 1: Syed Farhan Alam Zaidi
Author 2: Munam Ali Shah
Author 3: Muhammad Kamran
Author 4: Qaisar Javaid
Author 5: Sijing Zhang

Keywords: Smartphone Security; Vulnerabilities; Attacks; Malware

PDF

Paper 27: Hex Symbols Algorithm for Anti-Forensic Artifacts on Android Devices

Abstract: Mobile phone technology has become one of the most common and important technologies, starting as a communication tool and then evolving into a key reservoir of personal information and smart applications. With this increased level of complexity have come increased dangers and increased levels of countermeasures and opposing countermeasures, such as mobile forensics and anti-forensics. One of these anti-forensic tools is steganography, which introduces higher levels of complexity and security against hackers' attacks but simultaneously creates obstacles for forensic investigations. A new anti-forensic approach for embedding data in the steganography field is proposed in this paper. It is based on hiding secret data in hex-symbol carrier files rather than the usual multimedia carrier files, including video, image, and sound files. Furthermore, this approach utilizes hexadecimal codes to embed the secret data, in contrast to conventional steganography approaches, which apply changes to binary codes. Accordingly, the resulting data in the carrier files will be unfathomable without the use of special keys, yielding a high level of attack and deciphering difficulty. Besides embedding the secret data in the form of hex symbols, the procedure agreed upon between the communicating parties follows a random embedding manner formulated using the WinHex software. Files can be exchanged among Android devices and/or computers. Experiments were conducted applying the proposed algorithm on rooted Android devices, which were also connected to computers. The proposed method showed advantages over currently existing steganography approaches in terms of character frequency, capacity, security, and robustness.

Author 1: Somyia M. Abu Asbeh
Author 2: Sarah M. Hammoudeh
Author 3: Hamza A. Al-Sewadi
Author 4: Arab M. Hammoudeh

Keywords: Mobile Forensics; Anti-Forensics; Artifact Wiping; Data Hiding; Wicker; Steganography

PDF

Paper 28: Urdu to Punjabi Machine Translation: An Incremental Training Approach

Abstract: The statistical machine translation approach is highly popular in automatic translation research and promises good accuracy. Efforts have been made to develop an Urdu to Punjabi statistical machine translation system. The system is based on an incremental training approach to train the statistical model. In place of a parallel sentence corpus, manually mapped phrases were used to train the model. In the preprocessing phase, various rules were used for the tokenization and segmentation processes. Along with these rules, a text classification system was implemented to classify input text into predefined classes, and the decoder translates the given text according to the domain selected by the text classifier. The system uses a Hidden Markov Model (HMM) for the learning process, and the Viterbi algorithm is used for decoding. Experiments and evaluation have shown that a simple statistical model like the HMM yields good accuracy for a closely related language pair like Urdu-Punjabi. The system achieved a 0.86 BLEU score and more than 85% accuracy in manual testing.
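The Viterbi decoding step mentioned above is standard and can be sketched compactly. The two-state toy model below is invented for illustration (placeholder tokens, not the paper's trained Urdu-Punjabi parameters):

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence for an observation sequence."""
    V = [{s: start_p[s] * emit_p[s].get(obs[0], 0.0) for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            # Best predecessor for state s at time t.
            p, prev = max((V[t - 1][r] * trans_p[r].get(s, 0.0), r) for r in states)
            V[t][s] = p * emit_p[s].get(obs[t], 0.0)
            back[t][s] = prev
    best = max(V[-1], key=V[-1].get)
    path = [best]
    for t in range(len(obs) - 1, 0, -1):   # follow backpointers
        path.append(back[t][path[-1]])
    return path[::-1]

# Toy model: two hidden "target phrase" states A/B emitting source tokens x/y.
states = ["A", "B"]
start_p = {"A": 0.6, "B": 0.4}
trans_p = {"A": {"A": 0.7, "B": 0.3}, "B": {"A": 0.4, "B": 0.6}}
emit_p = {"A": {"x": 0.9, "y": 0.1}, "B": {"x": 0.2, "y": 0.8}}
path = viterbi(["x", "y", "y"], states, start_p, trans_p, emit_p)   # → ['A', 'B', 'B']
```

A production decoder would work in log-space to avoid underflow on long sentences.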

Author 1: Umrinderpal Singh
Author 2: Vishal Goyal
Author 3: Gurpreet Singh Lehal

Keywords: Machine Translation; Urdu to Punjabi Machine Translation; NLP; Urdu; Punjabi; Indo-Aryan Languages

PDF

Paper 29: Holistic Evaluation Framework for Automated Bug Triage Systems: Integration of Developer Performance

Abstract: Bug triage is an important aspect of open source software development. An automated bug triage system is essential to reduce the cost and effort incurred by manual bug triage. At present, the metrics available in the literature to evaluate automated bug triage systems are only recommendation-centric: they address only the correctness and coverage of the system. Thus, there is a need for user-centric evaluation of the bug triage system. The two types of metrics to evaluate an automated bug triage system are recommendation metrics and user metrics, and the results produced by the recommendation metrics need to be corroborated with user metrics. To this end, this paper furnishes a holistic evaluation framework for bug triage systems by integrating developer performance into the evaluation framework. The automated bug triage system is also expected to retrieve a set of developers for resolving a bug. Hence, this paper proposes Key Performance Indicators (KPIs) for appraising a developer's effectiveness in contributing to the resolution of a bug. By applying the KPIs to the retrieved set of developers, the bug triage system can be evaluated quantitatively.

Author 1: Dr. V. Akila
Author 2: Dr.V.Govindasamy

Keywords: Bug Management; Bug Triage; Recommendation Metrics; Key Performance Indicators; Developer Performance; Bug Resolution Time

PDF

Paper 30: An Analysis on Host Vulnerability Evaluation of Modern Operating Systems

Abstract: Security is a major concern in all computing environments. One way to achieve security is to deploy a secure operating system (OS). A trusted OS can secure all resources and resist vulnerabilities and attacks effectively. In this paper, our contribution is twofold. Firstly, we critically analyze the host vulnerabilities in modern desktop OSs. We group existing approaches and provide an easy and concise view of the different security models adopted by the most widely used OSs. A comparison of several OSs regarding structure, architecture, mode of working, and security models also forms part of the paper. Secondly, we use current usage statistics for Windows, Linux, and Mac OSs to predict their future. Our forecast will help the designers, developers, and users of the different OSs to prepare for the upcoming years accordingly.

Author 1: Afifa Sajid
Author 2: Munam Ali Shah
Author 3: Muhammad Kamran
Author 4: Qaisar Javaid
Author 5: Sijing Zhang

Keywords: Security; Operating system; Virtualization; kernel; Reliability; Vulnerability evaluation

PDF

Paper 31: The Problem of Universal Grammar with Multiple Languages: Arabic, English, Russian as Case Study

Abstract: Every language has its own characteristics and rules, though all languages share the same components, such as words, sentences, subject, verb, and object. Nevertheless, Chomsky proposed a theory of language acquisition in which children learn instinctively through a universal grammar common to all human languages. Since its declaration, this theory has encountered criticism from linguists. In this paper, that criticism is presented, and the conclusion that the universal grammar is not compatible with all human languages is supported by studying three human languages as a case study, namely Arabic, English, and Russian.

Author 1: Nabeel Imhammed Zanoon

Keywords: Chomsky; linguistic; Universal Grammar; Arabic; English; Russian

PDF

Paper 32: The Application of Fuzzy Control in Water Tank Level Using Arduino

Abstract: Fuzzy logic control has been successfully utilized in various industrial applications; it is generally used in complex control systems, such as chemical process control. Today, most fuzzy logic controllers are still implemented on expensive high-performance processors. This paper analyzes the effectiveness of a fuzzy logic controller running on a low-cost controller applied to a water level control system. The paper also gives a low-cost hardware solution and a practical procedure for system identification and control. First, the mathematical model of the process was obtained with the help of Matlab. Then two methods were used to control the system: PI (proportional-integral) control and fuzzy control. Simulation and experimental results are presented.
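The kind of fuzzy level controller described above can be sketched with triangular memberships and weighted-average (Sugeno-style) defuzzification. The breakpoints, rule table, and output levels below are illustrative assumptions, not the paper's tuned values:

```python
def tri(x, a, b, c):
    # Triangular membership function with feet at a and c and peak at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_valve(error):
    """Map level error (setpoint - level, in cm) to a pump command in [0, 1]."""
    error = max(-15.0, min(15.0, error))          # saturate the universe of discourse
    # Fuzzify the error (breakpoints are illustrative).
    neg = tri(error, -20, -10, 0)
    zero = tri(error, -10, 0, 10)
    pos = tri(error, 0, 10, 20)
    # Rules: negative error -> close (0.0), zero -> hold (0.5), positive -> open (1.0).
    weights = [(neg, 0.0), (zero, 0.5), (pos, 1.0)]
    total = sum(w for w, _ in weights)
    # Weighted-average defuzzification.
    return sum(w * u for w, u in weights) / total if total else 0.5
```

On an Arduino, the same three membership evaluations and the weighted average fit comfortably in integer or float arithmetic inside the control loop, which is what makes a low-cost implementation feasible.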

Author 1: Fayçal CHABNI
Author 2: Rachid TALEB
Author 3: Abderrahmen BENBOUALI
Author 4: Mohammed Amin BOUTHIBA

Keywords: Fuzzy control; PI; PID; Arduino; System identification

PDF

Paper 33: A Comparative Study of Databases with Different Methods of Internal Data Management

Abstract: The purpose of this paper is to present a comparative study between a non-relational MongoDB database and a relational Microsoft SQL Server database for an unstructured representation of data in XML or JSON format. We mainly focus on exploring all the possibilities that each type of database offers in cases where the data to be stored cannot be, or is not intended to be, normalized. This scenario is most often found in production when the application being developed extracts unstructured data from social networks or the various other channels a user might have. The comparative study is based on a benchmark application developed in C# using Visual Studio 2013, which accesses databases created beforehand with proper optimizations that will be described.

Author 1: Cornelia Gyorödi
Author 2: Robert Gyorödi
Author 3: Alexandra Stefan
Author 4: Livia Bandici

Keywords: MongoDB; Microsoft SQL Server; NoSQL; non-relational database

PDF

Paper 34: Improve Query Performance On Hierarchical Data. Adjacency List Model Vs. Nested Set Model

Abstract: Hierarchical data are found in a variety of database applications, including content management categories, forums, business organization charts, and product categories. In this paper, we examine two models for dealing with hierarchical data in relational databases, namely the adjacency list model and the nested set model. We analysed these models by executing various operations and queries in a web application for the management of categories, highlighting the results obtained during performance comparison tests. The purpose of this paper is to present the advantages and disadvantages of using the adjacency list model compared to the nested set model in a relational database integrated into an application for the management of categories, which needs to manipulate a large amount of hierarchical data.
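The contrast between the two models can be seen in how a "fetch a whole subtree" query is written. A minimal sketch using SQLite (the paper targets MSSQL 2014; table names and sample data here are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Nested set model: each node stores lft/rgt bounds that enclose its subtree.
cur.execute("CREATE TABLE category (name TEXT, lft INTEGER, rgt INTEGER)")
cur.executemany("INSERT INTO category VALUES (?, ?, ?)", [
    ("Electronics", 1, 10), ("Televisions", 2, 5), ("LCD", 3, 4),
    ("Portable", 6, 9), ("MP3 Players", 7, 8),
])
# One range predicate fetches the entire subtree -- no recursion needed.
cur.execute("""
    SELECT node.name FROM category AS node, category AS parent
    WHERE node.lft BETWEEN parent.lft AND parent.rgt
      AND parent.name = 'Televisions'
    ORDER BY node.lft
""")
nested = [r[0] for r in cur.fetchall()]          # → ['Televisions', 'LCD']

# Adjacency list model: each node stores only its parent, so the same
# query needs a recursive common table expression.
cur.execute("CREATE TABLE cat_adj (name TEXT, parent TEXT)")
cur.executemany("INSERT INTO cat_adj VALUES (?, ?)", [
    ("Electronics", None), ("Televisions", "Electronics"), ("LCD", "Televisions"),
    ("Portable", "Electronics"), ("MP3 Players", "Portable"),
])
cur.execute("""
    WITH RECURSIVE sub(name) AS (
        SELECT 'Televisions'
        UNION ALL
        SELECT c.name FROM cat_adj c JOIN sub ON c.parent = sub.name
    )
    SELECT name FROM sub
""")
adjacency = [r[0] for r in cur.fetchall()]
```

The trade-off the paper measures follows from this shape: nested set subtree reads are a single range scan, while inserts and moves must renumber lft/rgt bounds; the adjacency list is cheap to update but pays for recursion on reads.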

Author 1: Cornelia Gyorödi
Author 2: Romulus-Radu Moldovan-Duse
Author 3: Robert Gyorödi
Author 4: George Pecherle

Keywords: adjacency list model; nested set model; relational database; MSSQL 2014; hierarchical data

PDF

Paper 35: Content-Based Image Retrieval for Medical Applications with Flip-Invariant Consideration Using Low-Level Image Descriptors

Abstract: Content-Based Image Retrieval (CBIR) has recently become one of the most attractive topics in medical image research. Most existing research on CBIR has focused on low-level image descriptors, which sometimes weakens retrieval accuracy since many relevant images are not retrieved. A limited number of studies consider flipping the image on different sides. In order to fill this knowledge gap, this research focuses on including flipped images in retrieval, based on a previously implemented system that uses low-level image descriptors. Improvements are made to the system by extracting the features of both the main and flipped images. The final results show that the proposed system outperforms the existing system. The system has proven to be a powerful method for helping medical staff, physician decision makers, and students obtain better results by providing a wide range of needed images, and it helps in reasoning and building better decisions.

Author 1: Qusai Q. Abuein
Author 2: Mohammed Q. Shatnawi
Author 3: Radwan Batiha
Author 4: Ahmad Al-Aiad
Author 5: Suzan Amareen

Keywords: CBIR; image retrieval; feature extraction; medical images; flipped images

PDF

Paper 36: A Study to Investigate State of Ethical Development in E-Learning

Abstract: Several studies have shown that e-learning provides more opportunities to behave unethically than traditional learning. A descriptive quantitative enquiry-based study was performed to explore this issue in e-learning environments. The factors required for the ethical development of students were extracted from the literature, and their significance and status in e-learning were then assessed with a 5-point Likert-scale survey. The sample consisted of 47 teachers, 298 students, and 31 administrative staff of e-learning management involved in e-learning. The work also observed the state of students with respect to various ethical behaviors. The study emphasized that the physical presence of the teacher, an ethically conducive institutional environment, and the involvement of society members are among the main factors that help in the ethical development of a student, and these are missing in e-learning. The results showed that the moral behavior of e-learners is in decline because of the lack of these required factors in e-learning. This work also suggests the need for a model indicating how these deficiencies can be addressed by educational institutions for the ethical development of higher education learners.

Author 1: AbdulHafeez Muhammad
Author 2: Mohd. Feham MD. Ghalib
Author 3: Farooq Ahmad
Author 4: Quadri N Naveed
Author 5: Asadullah Shah

Keywords: e-Learning; ethical development; ethics

PDF

Paper 37: Improved Tracking Using a Hybrid Optical-Haptic Three-Dimensional Tracking System

Abstract: The aim of this paper is to assess to what extent an optical tracking system (OTS) used for position tracking in virtual reality can be improved by combining it with a human-scale haptic device named Scalable-SPIDAR. The main advantage of the Scalable-SPIDAR haptic device is that it is unobtrusive and does not depend on a free line of sight. Unfortunately, the accuracy of the Scalable-SPIDAR is affected by a poorly tailored mechanical design. We explore to what extent the influence of these inaccuracies can be compensated by collecting precise information on the nonlinear error using the OTS and applying support vector regression (SVR) to calibrate the haptic device's reports. After calibration of the Scalable-SPIDAR, we found that the average error in position readings was reduced from 263.7240±75.6207 mm to 12.6045±8.4169 mm. These results encourage the development of a hybrid haptic-optical system for virtual reality applications in which the haptic device acts as an auxiliary source of position information for the optical tracker.
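The calibration idea, learning a correction from (distorted haptic reading, optical ground truth) pairs, can be sketched as follows. The paper uses SVR; this dependency-free sketch substitutes an ordinary least-squares polynomial fit, and the distortion function is invented to stand in for the device's nonlinear error:

```python
import numpy as np

rng = np.random.default_rng(1)

def distort(p):
    # Hypothetical smooth nonlinear distortion standing in for the SPIDAR's error.
    return p + 0.002 * p**2 - 3.0 * np.sin(p / 40.0)

# Training pairs: haptic reading vs. optically tracked ground truth (one axis shown).
truth = rng.uniform(-100, 100, 500)
reading = distort(truth)

# Fit a cubic correction reading -> truth (least squares here; the paper uses SVR).
coeffs = np.polyfit(reading, truth, deg=3)
calibrated = np.polyval(coeffs, reading)

raw_err = np.abs(reading - truth).mean()
cal_err = np.abs(calibrated - truth).mean()
print(raw_err, cal_err)   # calibration should shrink the mean absolute error
```

With SVR (e.g. an RBF kernel, one regressor per axis), the same reading-to-truth mapping is learned non-parametrically, which handles error shapes a low-order polynomial cannot.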

Author 1: M’hamed Frad
Author 2: Hichem Maaref
Author 3: Samir Otman
Author 4: Abdellatif Mtibaa

Keywords: virtual reality; Scalable-SPIDAR; support vector regression; hybrid tracking system

PDF

Paper 38: Physiologically Motivated Feature Extraction for Robust Automatic Speech Recognition

Abstract: In this paper, a new method is presented to extract speech features that are robust in the presence of external noise. The proposed method, based on two-dimensional Gabor filters, takes into account the spectro-temporal modulation frequencies and also limits redundancy at the feature level. The performance of the proposed feature extraction method was evaluated on isolated speech words extracted from the TIMIT corpus and corrupted by background noise. The evaluation results demonstrate that the proposed method outperforms classic methods such as Perceptual Linear Prediction, Linear Predictive Coding, Linear Prediction Cepstral Coefficients, and Mel Frequency Cepstral Coefficients.
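A two-dimensional Gabor filter of the kind referred to above is a Gaussian envelope multiplied by a complex sinusoidal carrier over the time-frequency plane. The sizes and modulation rates below are illustrative, not the paper's filter bank:

```python
import numpy as np

def gabor_2d(num_t, num_f, omega_t, omega_f, sigma_t, sigma_f):
    """Spectro-temporal Gabor filter: Gaussian envelope times a complex carrier.

    omega_t / omega_f are the temporal and spectral modulation frequencies
    (cycles per sample along each axis).
    """
    t = np.arange(num_t) - num_t // 2
    f = np.arange(num_f) - num_f // 2
    T, F = np.meshgrid(t, f, indexing="ij")
    envelope = np.exp(-(T**2 / (2 * sigma_t**2) + F**2 / (2 * sigma_f**2)))
    carrier = np.exp(2j * np.pi * (omega_t * T + omega_f * F))
    return envelope * carrier

# A 15x15 filter tuned to slow temporal and spectral modulations.
g = gabor_2d(15, 15, omega_t=0.1, omega_f=0.05, sigma_t=3.0, sigma_f=3.0)
```

Feature extraction then amounts to 2-D convolution of each filter with a (time x frequency) spectrogram; a bank of filters at different modulation rates yields the spectro-temporal features.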

Author 1: Ibrahim Missaoui
Author 2: Zied Lachiri

Keywords: Feature extraction; Two-dimensional Gabor filters; Noisy speech recognition

PDF

Paper 39: An Informational Model as a Guideline to Design Sustainable Green SLA (GSLA)

Abstract: Recently, the Service Level Agreement (SLA) and green SLA (GSLA) have become very important both for service providers/vendors and for users/customers. SLAs inform users/customers about various services, their execution functionalities, and even non-functional/Quality of Service (QoS) aspects. However, these basic SLAs do not cover eco-efficient green issues or IT ethics issues for sustainable development, which is why the green SLA (GSLA) came into play for achieving sustainability in the industry. Nevertheless, current GSLA practice in the industry does not yet respect sustainability. A GSLA is defined as a formal agreement incorporating all the traditional commitments while respecting green computing parameters such as carbon footprint and energy consumption. Therefore, there are still gaps in achieving sustainability through existing GSLAs. To reach the goal of achieving sustainability and attracting more customers, many IT (Information Technology) and ICT (Information and Communication Technology) businesses are looking for a real GSLA that would meet the ecological, economical, and ethical aspects (3Es) of sustainability. This research identifies the missing parameters and introduces new parameters under the umbrella of sustainability. In addition, it defines a GSLA for sustainability with new green performance indicators and their measurable units. It also examines the management complexity of the proposed GSLA by designing a general informational model and identifying various new entities and their interactions under the three pillars of sustainability. ICT engineers could use the informational model as a guideline to design a sustainable GSLA for the industry, and the proposed model could help different service providers/vendors define their future business strategies for the upcoming sustainable society.

Author 1: Iqbal Ahmed
Author 2: Hiroshi Okumura
Author 3: Kohei Arai

Keywords: SLA; GSLA; Green Computing; Sustainability; IT ethics; Informational model

PDF

Paper 40: A Novel Approach to Detect Duplicate Code Blocks to Reduce Maintenance Effort

Abstract: It has been found in many cases that a piece of code may be a clone for one programmer but not for another. This problem occurs because of inaccurate documentation. According to research, maintainers are not aware of the original design and thus face difficulty agreeing on the system's components and their relations, or understanding how the application works. The problem also arises when development and maintenance are carried out by different teams, resulting in more effort and time during maintenance. This paper proposes a novel approach to detect clones at the programmer's side so that if a particular piece of code is a clone, it can be well documented. The approach provides both the individual duplicate statements and the block in which they appear. It has been examined on seven open source systems.

Author 1: Sonam Gupta
Author 2: Dr. P. C Gupta

Keywords: Clones; Program Dependence Graph (PDG); Control Flow Graph (CFG); Abstract Syntax Tree (AST)

PDF

Paper 41: An Adaptive Key Exchange Procedure for VANET

Abstract: VANET is a promising technology for intelligent transport systems (ITS). It offers new opportunities aiming at improving the circulation of vehicles on the roads and improving road safety. However, vehicles are interconnected by wireless links and without using any infrastructure, which exposes the vehicular network to many attacks. This paper presents a new solution for the exchange of security keys to protect information exchanged between vehicles. In addition to securing the inter-vehicular communication, the proposed solution has considerably decreased the time for the exchange of keys, thus improving the performance of VANET.

Author 1: Hamza Toulni
Author 2: Mohcine Boudhane
Author 3: Benayad Nsiri
Author 4: Mounia Miyara

Keywords: ITS; Vehicular ad-hoc networks; public key exchange; Security

PDF

Paper 42: A New Particle Swarm Optimization Based Stock Market Prediction Technique

Abstract: Over the last years, the average person's interest in the stock market has grown dramatically. This demand has multiplied with the advancement of technology that has opened up the international stock market, so that nowadays anybody can own stocks and use many types of software to pursue the desired profit with minimum risk. Consequently, the analysis and prediction of future values and trends of the financial markets have received more attention, and due to its large application in different business transactions, stock market prediction has become a critical topic of research. In this paper, our earlier presented particle swarm optimization with center of mass technique (PSOCoM) is applied to the task of training an adaptive linear combiner to form a new stock market prediction model. This prediction model is used with some common indicators to maximize return and minimize risk for the stock market. The experimental results show that the proposed technique is superior to other PSO-based models in terms of prediction accuracy.
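Training a linear combiner with PSO, the core of the model described above, can be sketched as follows. This is standard global-best PSO on a toy series; the paper's center-of-mass variant (PSOCoM) and its market indicators are not reproduced here:

```python
import numpy as np

def pso_minimize(loss, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Standard global-best PSO: track personal and global bests, update velocities."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([loss(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.array([loss(p) for p in x])
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Toy use: fit a 3-tap linear combiner for one-step-ahead prediction of a series.
series = np.sin(np.arange(200) * 0.1)
X = np.stack([series[i:i + 3] for i in range(len(series) - 3)])
y = series[3:]
mse = lambda wts: float(((X @ wts - y) ** 2).mean())
weights, err = pso_minimize(mse, dim=3)
```

The PSOCoM variant replaces the global-best attractor with a center-of-mass term computed over the swarm; the surrounding training loop stays the same.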

Author 1: Essam El. Seidy

Keywords: Computational intelligence; Particle Swarm Optimization; Stock Market; Prediction

PDF

Paper 43: Devising a Secure Architecture of Internet of Everything (IoE) to Avoid the Data Exploitation in Cross Culture Communications

Abstract: The communication infrastructure among various interconnected devices has revolutionized the process of collecting and sharing information. This evolutionary paradigm of collecting, storing, and analyzing data streams is called the Internet of Everything (IoE). Information exchange through the IoE is fast and accurate but leaves security issues. The emergence of the IoE has seen a drift from a single novel technology to several technological developments. Managing various technologies under one infrastructure is complex, especially when a network openly allows nodes to access it. The access transition of infrastructures from closed networked environments to the public internet has raised security issues. The consistent growth in IoE technology is recognized as a bridge between the physical, virtual, and cross-cultural worlds. Modern enterprises are becoming reliant on interconnected wireless intelligent devices, and this has put billions of users' data at risk. Interference and intrusion in any infrastructure have opened the door to public safety concerns, because such interception could compromise users' personal data as well as their privacy. This research adopts a holistic approach to devising a secure IoE architecture for cross-culture communication organizations, with attention paid to the various wearable technological devices, their security policies, communication protocols, data formats, and data encryption features to avoid data exploitation. A systems methodology is adopted with a view to developing a secure IoE model that provides for a generic implementation after analyzing the critical security features needed to minimize the risk of data exploitation. This combines the ability of the IoE to connect, communicate with, and remotely manage an incalculable number of networked, automated devices with the security properties of authentication, availability, integrity, and confidentiality on a configurable basis. This will help clarify the issues currently present and considerably narrow down security threat planning.

Author 1: Asim Majeed
Author 2: Rehan Bhana
Author 3: Anwar Ul Haq
Author 4: Mike-Lloyd Williams

Keywords: privacy; privacy enhancing technology (PET); big data; information communication technology (ICT)

PDF

Paper 44: The Group Decision Support System to Evaluate the ICT Project Performance Using the Hybrid Method of AHP, TOPSIS and Copeland Score

Abstract: This paper proposes a concept for a Group Decision Support System (GDSS) to evaluate the performance of Information and Communications Technology (ICT) projects in Indonesian regional government agencies, to overcome inconsistencies which may occur in the decision-making process. Considering the applicable legislation, the decision makers involved in assessing and evaluating ICT project implementation in regional government agencies are the executing parties of government institutions, ICT management work units, business process owner units, and society, represented by the DPRD (Regional People's Representative Assembly). The contributions of these decision makers take the form of preferences for evaluating ICT project alternatives against predetermined criteria, using Multiple Criteria Decision Making (MCDM) methods. This research presents a GDSS framework integrating the Analytic Hierarchy Process (AHP), the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), and the Copeland Score. The AHP method generates weights for the criteria, which are used as input in the calculation process of the TOPSIS method. The TOPSIS calculation produces a project ranking for each decision maker, and to combine the decision makers' differing preferences, the Copeland Score, a voting method, is used to determine the best overall project ranking from the individual rankings.
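The TOPSIS stage of the pipeline above is mechanical and can be sketched directly; the decision matrix, weights, and criteria below are invented for illustration (in the paper, the weights come from AHP and one ranking is produced per decision maker):

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Closeness scores for alternatives: rows = projects, cols = criteria.

    benefit[j] is True when larger values of criterion j are better.
    """
    m = matrix / np.linalg.norm(matrix, axis=0)      # vector normalisation
    m = m * weights                                  # criterion weights (e.g. from AHP)
    ideal = np.where(benefit, m.max(0), m.min(0))    # positive ideal solution
    anti = np.where(benefit, m.min(0), m.max(0))     # negative ideal solution
    d_pos = np.linalg.norm(m - ideal, axis=1)
    d_neg = np.linalg.norm(m - anti, axis=1)
    return d_neg / (d_pos + d_neg)                   # closeness: higher is better

# Three projects scored on (cost, quality); cost is a cost criterion.
scores = topsis(np.array([[250.0, 6.0], [200.0, 7.0], [300.0, 5.0]]),
                weights=np.array([0.6, 0.4]),
                benefit=np.array([False, True]))
```

The per-decision-maker rankings implied by these scores are then combined with the Copeland Score: each project earns a point for every pairwise ranking contest it wins across decision makers, and the totals give the group ranking.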

Author 1: Herri Setiawan
Author 2: Jazi Eko Istiyanto
Author 3: Retantyo Wardoyo
Author 4: Purwo Santoso

Keywords: GDSS; ICT; MCDM; AHP; TOPSIS; Copeland Score; Decision Maker

PDF
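
The Copeland Score step described above aggregates the per-decision-maker rankings produced by TOPSIS into a single group ranking by pairwise majority voting. A minimal sketch under that reading of the method (function name, project labels and rankings are illustrative, not taken from the paper):

```python
def copeland_scores(rankings):
    """rankings: list of rank orders, each a list of alternatives, best first.
    An alternative gains +1 for every pairwise majority win and -1 per loss."""
    alternatives = rankings[0]  # assumes every ranking covers the same set
    scores = {a: 0 for a in alternatives}
    for a in alternatives:
        for b in alternatives:
            if a == b:
                continue
            # Count how many decision makers rank a above b.
            wins = sum(1 for r in rankings if r.index(a) < r.index(b))
            losses = len(rankings) - wins
            if wins > losses:
                scores[a] += 1   # a beats b in a pairwise majority vote
            elif wins < losses:
                scores[a] -= 1   # a loses the pairwise contest to b
    return scores

# Three hypothetical decision makers rank four ICT projects:
ranks = [["P1", "P2", "P3", "P4"],
         ["P2", "P1", "P3", "P4"],
         ["P1", "P3", "P2", "P4"]]
scores = copeland_scores(ranks)
best = max(scores, key=scores.get)
```

Here `P1` wins every pairwise contest, so it tops the combined group ranking even though one decision maker preferred `P2`.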

Paper 45: A Novel Efficient Forecasting of Stock Market Using Particle Swarm Optimization with Center of Mass Based Technique

Abstract: This paper develops an efficient forecasting model for various stock price indices based on the previously introduced particle swarm optimization with center of mass (PSOCOM) technique. The structure used in the proposed prediction models is a simple linear combiner trained with PSOCOM by minimizing its mean square error (MSE). The proposed model is compared with other models such as standard PSO, the genetic algorithm, bacterial foraging optimization, and adaptive bacterial foraging optimization. The experimental results show that the PSOCOM algorithm is the best among the compared algorithms in terms of MSE and prediction accuracy for several stock price indices. Moreover, the proposed forecasting model gives accurate predictions for both short- and long-term horizons. As a result, the proposed stock market prediction model is more efficient than the other compared models.

Author 1: Razan A. Jamous
Author 2: Essam El.Seidy
Author 3: Bayoumi Ibrahim Bayoum

Keywords: Stock market forecasting; particle swarm optimization; Bacterial foraging optimization; Adaptive bacterial foraging optimization; Genetic algorithm

PDF
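
As a baseline for the approach described above, standard PSO can fit the weights of a linear combiner by minimizing its MSE; the sketch below shows that baseline only (the center-of-mass update that distinguishes PSOCOM is the paper's contribution and is not reproduced here; all parameter values are illustrative):

```python
import random

def mse(weights, X, y):
    """Mean square error of a simple linear combiner y_hat = w . x."""
    preds = [sum(w * xi for w, xi in zip(weights, x)) for x in X]
    return sum((p - t) ** 2 for p, t in zip(preds, y)) / len(y)

def pso(X, y, dim, n_particles=20, iters=300, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Standard PSO: particles track personal and global best positions."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_err = [mse(p, X, y) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_err[i])
    gbest, gbest_err = pbest[g][:], pbest_err[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            err = mse(pos[i], X, y)
            if err < pbest_err[i]:
                pbest[i], pbest_err[i] = pos[i][:], err
                if err < gbest_err:
                    gbest, gbest_err = pos[i][:], err
    return gbest, gbest_err
```

On a toy series where the target is an exact linear combination of the inputs, the swarm drives the MSE close to zero within a few hundred iterations.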

Paper 46: Throughput Measurement Method Using Command Packets for Mobile Robot Teleoperation Via a Wireless Sensor Network

Abstract: We are working to develop an information gathering system comprising a mobile robot and a wireless sensor network (WSN) for use in post-disaster underground environments. In the proposed system, a mobile robot carries wireless sensor nodes and deploys them to construct a WSN in the environment, thus providing a wireless communication infrastructure for mobile robot teleoperation. An operator then controls the mobile robot remotely while monitoring end-to-end communication quality with the mobile robot. Measurement of communication quality on wireless LANs has been widely studied. However, a throughput measurement method has not been developed for assessing the usability of wireless mobile robot teleoperation. In particular, a measurement method is needed that can handle mobile robots as they move around an unknown environment. Accordingly, in this paper, we propose a method for measuring throughput as a measure of communication quality in a WSN for wireless teleoperation of mobile robots. The feasibility of the proposed method was evaluated and verified in a practical field test in which an operator remotely controlled mobile robots using a WSN.

Author 1: Kei SAWAI
Author 2: Ju Peng
Author 3: Tsuyoshi Suzuki

Keywords: Wireless Sensor Networks; Rescue Robot Teleoperation; Communication Quality Measurement

PDF

Paper 47: User Interface Menu Design Performance and User Preferences: A Review and Ways Forward

Abstract: This review paper is about menus on web pages and applications and their positioning on the user screen. The paper aims to provide the reader with a succinct summary of the major research in this area along with an easy-to-read tabulation of the most important studies. Furthermore, the paper concludes with some suggestions for future research regarding how menus and their positioning on the screen could be improved. The two principal suggestions are to use more qualitative methods for investigating the issues and to develop more universally designed menus in the future.

Author 1: Dr Pietro Murano
Author 2: Margrete Sander

Keywords: Menus; navigation of interfaces; universal design; research methods

PDF

Paper 48: Edge Detection with Neuro-Fuzzy Approach in Digital Synthesis Images

Abstract: This paper presents an enhanced Neuro-Fuzzy (NF) approach to edge detection, together with an analysis of the method's characteristics. The specificity of our method is an enhancement of the learning database for diagonal edges compared to the original learning database. The original NF edge detection model that inspired this work, proposed by Emin Yuksel, uses a learning database built from just one image. The tests are carried out on synthesis images, including one corrupted with 20% Gaussian noise.

Author 1: Fatma ZRIBI
Author 2: Noureddine ELLOUZE

Keywords: Neuro-Fuzzy; learning databases; Gaussian noise; synthesis images

PDF

Paper 49: A Proposed Multi Images Visible Watermarking Technique

Abstract: Visible watermarking techniques are proposed to secure digital data against unauthorized attacks. These techniques protect data from illegal access and use. In this work, a multi visible watermarking technique that allows embedding different types of markers into different types of background images has been proposed. It also allows adding multiple markers to the same background image with different sizes, positions and opacity levels without any interference. The proposed technique improves the flexibility of visible watermarking and helps increase security levels. A visible watermarking system is designed to implement the proposed technique. The system facilitates single and multiple watermarking as illustrated in the proposed technique. Experimental results indicate that the proposed technique applies visible watermarking successfully.

Author 1: Ruba G. Al-Zamil
Author 2: Safa’a N. Al-Haj Saleh

Keywords: Visible watermarking; multi-image watermarking; marker image; background image; image channels; opacity; Matlab

PDF

Paper 50: Miniaturized Meander Slot Antenna for RFID TAG with Dielectric Resonator at 60 GHz

Abstract: Recent advances in millimeter-wave communications have called for the development of compact and efficient antennas. One of the greatest challenges in this area is to obtain good performance from a miniaturized antenna; antenna design in silicon technology is another key challenge. Accordingly, this work focuses on the design of a high-gain, high-efficiency on-chip meander-slot antenna in the free 60 GHz band for Radio Frequency Identification (RFID) transponders. Stacked dielectric resonators (DRs) are arranged above the driven element of the on-chip antenna, which is excited by a coplanar waveguide (CPW). We use the Scattering Bond-Graph formalism as a new technique to design the proposed antennas, and the CST Microwave Studio simulation software to validate the results. After a number of iterations, and by applying the Bond-Graph methodology, the proposed antenna was miniaturized to a size of about 1.2 × 1.1 mm².

Author 1: JMAL Sabri
Author 2: NECIBI Omrane
Author 3: TAGHOUTI Hichem
Author 4: MAMI Abdelkader
Author 5: GHARSALLAH Ali

Keywords: RFID TAG; Millimeter Wave Identification; Meander Slot Antenna; Dielectric Resonator Antenna (DRA); On-chip Antenna; Silicon; Scattering Matrix; Bond-Graph; Scattering Bond-Graph

PDF

Paper 51: Word Sense Disambiguation Approach for Arabic Text

Abstract: Word Sense Disambiguation (WSD) consists of identifying the correct sense of an ambiguous word occurring in a given context. Most Arabic WSD systems are based mainly on information extracted from the local context of the word to be disambiguated. This information is usually not sufficient for reliable disambiguation. To overcome this limit, we propose an approach that takes into consideration, in addition to the local context, the global context extracted from the full text. More particularly, the sense attributed to an ambiguous word is the one whose semantic proximity is closest to both its local and global contexts. The experiments show that the proposed system achieved an accuracy of 74%.

Author 1: Nadia Bouhriz
Author 2: Faouzia Benabbou
Author 3: El Habib Ben Lahmar

Keywords: Word Sense Disambiguation; Arabic Text; local context; global context; Arabic WordNet; Semantic Similarity

PDF

Paper 52: A Format-Compliant Selective Encryption Scheme for Real-Time Video Streaming of the H.264/AVC

Abstract: The H.264 video coding standard is one of the most promising techniques for future video communications. In fact, it supports a broad range of applications. Accordingly, with the continuous promotion of multimedia services, H.264 has been widely used in real-world applications. A major concern in the design of H.264 encryption algorithms is how to achieve a sufficiently high security level while maintaining the efficiency of the underlying compression process. In this paper, a new selective encryption scheme for the H.264 standard is presented. The aim of this work is to study the security of the H.264 standard in order to propose an appropriate design for a hardware crypto-processor based on a stream cipher algorithm. Since the proposed cryptosystem is mainly dedicated to multimedia applications, it provides multiple security levels in order to satisfy the requirements of various applications for different purposes while ensuring higher coding efficiency. Different performance analyses were made in order to evaluate the new encryption system. The experimental results showed the reliability and the robustness of the proposed technique.

Author 1: Fatma SBIAA
Author 2: Sonia KOTEL
Author 3: Medien ZEGHID
Author 4: Rached TOURKI
Author 5: Mohsen MACHHOUT
Author 6: Adel BAGANNE

Keywords: Video coding; Data encryption; Data compression; H.264/AVC

PDF

Paper 53: Off-Line Arabic (Indian) Numbers Recognition Using Expert System

Abstract: This paper proposes an effective approach to automatic recognition of printed Arabic numerals which are extracted from digital images. First, the input image is normalized and pre-processed to an acceptable form. From the preprocessed image, components of the words are segmented into individual objects representing different numbers. Second, the numerical recognition is performed using an expert system based on a set of if-else rules, where each set of rules represents the categorization of each number. Finally, rigorous experiments are carried out on 226 random Arabic numerals selected from 40 images of Iraqi car plate numbers. The proposed method attained an accuracy of 97%.

Author 1: Fahad Layth Malallah
Author 2: Mostafah Ghanem Saeed
Author 3: Maysoon M. Aziz
Author 4: Olasimbo Ayodeji Arigbabu
Author 5: Sharifah Mumtazah Syed Ahmad

Keywords: Arabic numeral character recognition; Image Processing; Pattern Recognition; Feature Extraction; Object Segmentation; Expert System

PDF

Paper 54: Spatiotemporal Context Modelling in Pervasive Context-Aware Computing Environment: A Logic Perspective

Abstract: Pervasive context-aware computing is one of the topics that has received particular attention from researchers. The context itself is an important notion explored in many works discussing its acquisition, definition, modelling, reasoning and more. Given the permanent evolution of context-aware systems, context modelling is still a complex task, due to the lack of an adequate, dynamic, formal and relevant context representation. This paper discusses various context modelling approaches and previous logic-based works. It also proposes a preliminary formal spatiotemporal context modelling based on first-order logic, derived from the structure of natural languages.

Author 1: Darine Ameyed
Author 2: Moeiz Miraoui
Author 3: Chakib Tadj

Keywords: context modelling; logic; formal; pervasive system; context-aware system

PDF

Paper 55: Application of Data Warehouse in Real Life: State-of-the-art Survey from User Preferences’ Perspective

Abstract: In recent years, due to increases in data complexity and manageability issues, data warehousing has attracted a great deal of interest in real-life applications, especially in business, finance, healthcare and industry. As the importance of retrieving information from a knowledge base cannot be denied, data warehousing is all about making information available for decision making. The data warehouse is accepted as the heart of the latest decision support systems. Given the growing adoption of data warehouses in real life, the need for their design and implementation in different applications is becoming crucial. Information from operational data sources is integrated by data warehousing into a central repository to enable the analysis and mining of the integrated information, primarily for strategic decision making by means of online analytical processing (OLAP) techniques. Despite the applications of data warehousing techniques in a number of areas, there is no comprehensive literature review of them. This survey paper is an effort to present the applications of data warehouses in real life. It aims to help scholars analyse data warehouse applications in a number of domains. This survey provides applications, case studies and analyses of data warehouses used in various domains based on user preferences.

Author 1: Muhammad Bilal Shahid
Author 2: Umber Sheikh
Author 3: Basit Raza
Author 4: Qaisar Javaid

Keywords: Data warehouse (DW); Data warehouse applications; Decision support systems; OLAP; Preference based

PDF

Paper 56: Improving and Extending Indoor Connectivity Using Relay Nodes for 60 GHz Applications

Abstract: A 60 GHz wireless system can provide very high data rates. However, it suffers from a tremendous amount of both Free Space Path Loss (FSPL) and penetration loss. To mitigate these losses and extend the system range, we propose techniques for using relay nodes. The relay node is positioned so as to shorten the distance between a source and a destination, which reduces the FSPL. In addition, correct positioning of the relay node provides an alternative Line of Sight (LoS) to overcome the penetration loss caused by human bodies. To address the last challenge, the considerably short range of a wireless network in the 60 GHz band, the range is extended by applying multi-hop communication with relay node selection: the length of the room was doubled while incurring the same losses as if there were no expansion. All three techniques were modelled in ‘Wireless InSite’ using three scenarios. The first scenario was a conference room with no obstacles, to focus on FSPL. In the second scenario, the same conference room was modelled but human bodies were taken into consideration to check the effect of penetration loss. The final scenario was an extended version of the first scenario, to deal with the short-range issue.

Author 1: Mohammad Alkhawatra
Author 2: Nidal Qasem

Keywords: 60 GHz; Indoor Wireless; Multi-hop; Relay; Relay Selection

PDF

Paper 57: Localization and Monitoring of Public Transport Services Based on Zigbee

Abstract: Regular and systematic public transport is of great importance to all residents in any country, both in the city and on commuter routes. In our environment, users of public transport can track the movement of vehicles only with great difficulty, given that the current system does not meet the necessary criteria and does not comply with the functioning of the transport system. The aim of this paper is to show the development of such a system using the ZigBee and Arduino platforms. This paper shows an example of using the technologies mentioned above, their main advantages and disadvantages, with an emphasis on communication between the devices and its smooth progress. In order to show how the system could function, a simple mesh network was created, consisting of a coordinator, routers for data distribution, and end devices representing the vehicles. To view the results, a web application was developed using open-source tools to display the collected data on the movement of nodes in the network.

Author 1: Izet Jagodic
Author 2: Suad Kasapovic
Author 3: Amir Hadzimehmedovic
Author 4: Lejla Banjanovic-Mehmedovic

Keywords: Wireless mesh network; Zigbee; Xbee; microcontroller; web development; integration

PDF

Paper 58: An Approach to Inertia Compensation Based on Electromagnetic Induction in Brake Tests

Abstract: This paper briefly introduces the operational principle of the brake test bench and points out the shortcomings of current control in brake tests, namely that the reference measurement data are instantaneous. To address this deficiency, a current control model based on electromagnetic induction and DC voltage is proposed. Based on the principle of electromagnetic induction, continuous data acquisition and automatic processing are realized. This significantly minimizes errors due to instantaneous data and maximizes the accuracy of the brake test.

Author 1: Xiaowen Li
Author 2: Han Que

Keywords: Brake test; Electromagnetic induction; DC trans-former

PDF

Paper 59: A Frequency Based Hierarchical Fast Search Block Matching Algorithm for Fast Video Communication

Abstract: Numerous fast-search block motion estimation algorithms have been developed to circumvent the high computational cost required by the full-search algorithm. These techniques, however, often converge to a local minimum, which makes them subject to noise and matching errors. Hence, many spatial-domain block matching algorithms have been developed in the literature. These algorithms exploit the high correlation that exists between pixels inside each frame block. However, with block-transformed frequencies, block matching can be used to test the similarities between a subset of selected frequencies that correctly identify each block uniquely; therefore fewer comparisons are performed, resulting in a considerable reduction in complexity. In this work, a two-level hierarchical fast-search motion estimation algorithm is proposed in the frequency domain. This algorithm incorporates a novel search pattern at the top level of the hierarchy. The proposed hierarchical method for motion estimation not only produces consistent motion vectors within each large object, but also accurately estimates the motion of small objects with a substantial reduction in complexity when compared to other benchmark algorithms.

Author 1: Nijad Al-Najdawi
Author 2: Sara Tedmori
Author 3: Omar A. Alzubi
Author 4: Osama Dorgham
Author 5: Jafar A. Alzubi

Keywords: Video coding; Frequency domain; Motion estimation; Hierarchical search; Block matching; Communication.

PDF
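
The spatial-domain block matching that the abstract contrasts against typically minimizes a sum of absolute differences (SAD) over all pixels of a candidate block; the frequency-domain variant proposed in the paper instead compares only a subset of transform coefficients. A minimal sketch of the SAD cost (illustrative, not the paper's algorithm):

```python
def sad(block_a, block_b):
    """Sum of absolute differences: the per-candidate matching cost
    minimized by spatial-domain block matching algorithms."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))
```

A full-search estimator evaluates this cost for every candidate displacement in the search window; fast-search and hierarchical methods prune that candidate set, which is where the complexity reduction comes from.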

Paper 60: A Survey On Interactivity in Topic Models

Abstract: Trying to make sense of and gain deeper insight from large sets of data is becoming a task central to computer science in general. Topic models, capable of uncovering the semantic themes pervading large collections of documents, have seen a surge in popularity in recent years. However, topic models are high-level statistical tools; their output is given in terms of probability distributions, suited neither for simple interpretation nor deep analysis. Interpreting fitted topic models in an intuitive manner requires visual and interactive tools. Additionally, some measure of human interaction is typically required for refining the output offered by such models. In research, this area remains relatively unexplored; only recently has this aspect received more attention. In this paper, the literature is surveyed as it pertains to interactivity and visualisation within the context of topic models, with the goal of identifying current research trends in this area.

Author 1: Patrik Ehrencrona Kjellin
Author 2: Yan Liu

Keywords: topic model; latent dirichlet allocation; LDA; interactive; visualisation; IVA; survey; review

PDF

Paper 61: Answer Extraction System Based on Latent Dirichlet Allocation

Abstract: The Question Answering (QA) task is still an active area of research in information retrieval. A variety of methods proposed in the literature during the last few decades to solve this task have achieved mixed success. However, such methods developed for the Arabic language are scarce and do not have a good performance record, due to the challenges of Arabic. QA based on Frequently Asked Questions is an important branch of QA in which a question is answered based on pre-answered ones. In this paper, the aim is to build a question answering system that responds to a user inquiry based on pre-answered questions. The proposed approach is based on Latent Dirichlet Allocation. Firstly, the dataset, consisting of pairs of questions and associated answers, is grouped into several clusters of related documents. Next, when a new question is posed to the system, it is assigned to its appropriate cluster, and a similarity measure is then used to get the top ten closest possible answers. Preliminary results show that the proposed method achieves a good level of performance.

Author 1: Mohammed A. S. Ali
Author 2: Sherif M. Abdou

Keywords: Question Answering; frequently asked questions; information retrieval; artificial intelligence

PDF
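
The retrieval step described above, ranking pre-answered questions by similarity to the new query, could look like the sketch below, assuming each FAQ entry is represented by an LDA topic-proportion vector and cosine similarity is used as the measure (the abstract does not name the measure, so cosine is an assumption):

```python
import math

def cosine(u, v):
    """Cosine similarity between two topic-proportion vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def top_answers(query_topics, faq, k=10):
    """faq: list of (topic_vector, answer) pairs from the clustered dataset.
    Returns the k answers whose questions are closest to the query."""
    ranked = sorted(faq, key=lambda qa: cosine(query_topics, qa[0]),
                    reverse=True)
    return [answer for _, answer in ranked[:k]]
```

In the real system this ranking would be restricted to the cluster the query was assigned to, rather than run over the whole dataset.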

Paper 62: Computational Intelligence Optimization Algorithm Based on Meta-heuristic Social-Spider: Case Study on CT Liver Tumor Diagnosis

Abstract: Feature selection is an important step in the classification phase and directly affects classification performance. A feature selection algorithm explores the data to eliminate noisy, redundant and irrelevant features and to optimize classification performance. This paper addresses a new subset feature selection performed by a new Social Spider Optimizer Algorithm (SSOA), which finds optimal regions of the complex search space through the interaction of individuals in the population. SSOA is a new nature-inspired meta-heuristic algorithm which mimics the behavior of cooperative social spiders based on the biological laws of the cooperative colony. Different combinations of features are obtained from different extraction methods in order to achieve optimal accuracy. A normalization function is applied to scale features into [0,1] and decrease the gaps between them. SSOA-based feature selection and reduction is compared with other methods on a CT liver tumor dataset; the proposed approach shows better performance in both feature size reduction and classification accuracy. Improvements are observed consistently across four classification methods. A theoretical analysis that models the number of correctly classified samples is proposed using the confusion matrix, precision, recall and accuracy. The achieved accuracy is 99.27%, precision is 99.37%, and recall is 99.19%. The results show that the mechanism of SSOA provides very good exploration, exploitation and local-minima avoidance.

Author 1: Mohamed Abu ElSoud
Author 2: Ahmed M. Anter

Keywords: Liver; CT; Social-Spider Optimization; Metaheuristics; Support Vector Machine; Random Selection Features; Classification; Sequential Forward Floating Search; Optimization.

PDF
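
The evaluation metrics reported above all derive from the confusion matrix; for the binary case they can be computed as in this minimal sketch (function name and labels are illustrative):

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision and recall from a binary confusion matrix,
    with 1 as the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return accuracy, precision, recall
```

Precision penalizes false positives and recall penalizes false negatives, which is why the paper reports all three figures rather than accuracy alone.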

Paper 63: Containing a Confused Deputy on x86: A Survey of Privilege Escalation Mitigation Techniques

Abstract: The weak separation between user space and kernel space in modern operating systems facilitates several forms of privilege escalation. This paper provides a survey of protection techniques, both cutting-edge and time-tested, used to prevent common privilege escalation attacks. The techniques are compared against each other in terms of their effectiveness, their performance impact, the complexity of their implementation, and their impact on diversification techniques such as ASLR. Overall, the literature provides a litany of disjoint techniques, each of which trades some performance cost for effectiveness against a particular isolated threat. No single technique was found to effectively mitigate all known and potential attack vectors with reasonable performance overhead.

Author 1: Scott Brookes
Author 2: Stephen Taylor

Keywords: Protection & Security; Virtualization; Kernel ROP; ret2usr; Kernel Code Implant; rootkits; Operating Systems; Privilege Escalation

PDF

Paper 64: Data Security, Privacy, Availability and Integrity in Cloud Computing: Issues and Current Solutions

Abstract: Cloud computing has changed the world around us. People are now moving their data to the cloud, since data is getting bigger and needs to be accessible from many devices; storing data in the cloud has therefore become the norm. However, many issues threaten data stored in the cloud, ranging from the virtual machine, which is the means of sharing resources in the cloud, to issues with cloud storage itself. In this paper, we present the issues that are preventing people from adopting the cloud and give a survey of solutions that have been proposed to minimize their risks. For example, data stored in the cloud needs to be kept confidential, integrity-preserved and available. Moreover, sharing data stored in the cloud among many users is still an issue, since the cloud service provider cannot be trusted to manage authentication and authorization. In this paper, we list issues related to data stored in cloud storage and solutions to those issues, which differentiates this work from other papers that focus on the cloud in general.

Author 1: Sultan Aldossary
Author 2: William Allen

Keywords: Data security; Data Confidentiality; Data Privacy; Cloud Computing; Cloud Security

PDF

Paper 65: Estimating the Parameters of Software Reliability Growth Models Using the Grey Wolf Optimization Algorithm

Abstract: In this age of technology, building quality software is essential to competing in the business market. One of the major principles required for any quality business software product to fulfill its value is reliability. Estimating software reliability early during the software development life cycle saves time and money, as it prevents spending larger sums fixing a defective software product after deployment. A Software Reliability Growth Model (SRGM) can be used to predict the number of failures that may be encountered during the software testing process. In this paper, we explore the advantages of the Grey Wolf Optimization (GWO) algorithm in estimating an SRGM’s parameters, with the objective of minimizing the difference between the estimated and the actual number of failures of the software system. We evaluated three different software reliability growth models: the Exponential Model (EXPM), the Power Model (POWM) and the Delayed S-Shaped Model (DSSM). In addition, we used three different datasets to conduct an experimental study showing the effectiveness of our approach.

Author 1: Alaa F. Sheta
Author 2: Amal Abdel-Raouf

Keywords: Software Reliability, Reliability Growth Models, Grey Wolf Optimizer, Exponential Model, Power Model, Delayed S-Shaped Model

PDF
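
The EXPM mentioned above is conventionally written as m(t) = a(1 - e^(-bt)), where a is the expected total number of failures and b the failure detection rate. A sketch of the fitting objective a GWO-style optimizer would minimize over (a, b) could look like this (function and parameter names are illustrative):

```python
import math

def expm(t, a, b):
    """Exponential SRGM: expected cumulative number of failures by time t."""
    return a * (1.0 - math.exp(-b * t))

def sse(params, data):
    """Objective for the optimizer: sum of squared differences between the
    model's estimated and the actually observed cumulative failure counts.
    data: list of (t, observed_failures) pairs."""
    a, b = params
    return sum((expm(t, m_obs, 0) * 0 + expm(t, a, b) - m_obs) ** 2
               for t, m_obs in data) if False else \
           sum((expm(t, a, b) - m_obs) ** 2 for t, m_obs in data)
```

The optimizer (GWO in the paper, but any metaheuristic fits this interface) searches (a, b) pairs that drive this sum toward zero; POWM and DSSM would simply swap in a different `expm`.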

Paper 66: Impact of IP Addresses Localization on the Internet Dynamics Measurement

Abstract: Many projects have sought to measure the dynamics of the Internet by using end-to-end measurement tools. The RADAR tool has been designed in this context. It consists in periodically tracing the routes from a monitor toward a set of destinations, IP addresses chosen randomly in the Internet. However, the localization of these destinations on the topology has a significant influence on the observed dynamics. We study the dynamics observed when the destinations are localized at a country scale. We show that this localization may lead to observing different dynamics. The local dynamics observed in our case is mainly routing dynamics, whereas load balancing dominates the dynamics of the Internet as a whole.

Author 1: Tounwendyam Frederic Ouedraogo
Author 2: Tonguim Ferdinand Guinko

Keywords: Networks; Internet; Dynamics; Measurement; Localization

PDF

Paper 67: Improving Credit Scorecard Modeling Through Applying Text Analysis

Abstract: In credit card scoring and loan management, the prediction of an applicant’s future behavior is an important decision support tool and a key factor in reducing the risk of loan default. Many data mining and classification approaches have been developed for credit scoring. To the best of our knowledge, building a credit scorecard by analyzing the textual data in the application form has not been explored so far. This paper proposes a comprehensive credit scorecard modeling technique that improves credit scorecard modeling through employing textual data analysis. This study uses a sample of loan application forms from a financial institution providing loan services in Yemen, which represents a real-world situation of credit scoring and loan management. The sample contains a set of Arabic textual attributes describing the applicants. A credit scoring model based on text mining pre-processing and logistic regression techniques is proposed and evaluated through a comparison with a group of credit scorecard modeling techniques that use only the numeric attributes in the application form. The results show that adding textual attribute analysis achieves higher classification effectiveness and outperforms the other traditional numerical data analysis techniques.

Author 1: Omar Ghailan
Author 2: Hoda M.O. Mokhtar
Author 3: Osman Hegazy

Keywords: Credit Scoring; Textual Data Analysis; Logistic Regression; Loan Default.

PDF
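
The key idea above, feeding textual application-form fields into the scorecard alongside the numeric attributes, amounts to concatenating both kinds of features into one vector before the logistic regression step. A minimal sketch with a toy bag-of-words representation (the vocabulary, field names and tokenization are illustrative; the paper's pipeline additionally involves Arabic-specific text pre-processing):

```python
def combine_features(numeric, text, vocabulary):
    """Append bag-of-words counts of an application's textual fields
    to its numeric attributes, yielding one combined feature vector
    for a classifier such as logistic regression."""
    tokens = text.lower().split()
    bow = [tokens.count(term) for term in vocabulary]
    return list(numeric) + bow
```

Each application form thus maps to a fixed-length vector, so the same logistic regression machinery used for the numeric-only scorecard applies unchanged to the enriched representation.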

Paper 68: Iterative Threshold Decoding of High-Rate Quasi-Cyclic OSMLD Codes

Abstract: Majority logic decoding (MLD) codes are very powerful thanks to the simplicity of the decoder. Nevertheless, finding constructive families of these codes has proven to be hard. Moreover, the majority of known MLD codes are cyclic, which limits the range of available rates. In this paper, a new adaptation of the iterative threshold decoding algorithm is considered for decoding high-rate Quasi-Cyclic One-Step Majority Logic Decodable (QC-OSMLD) codes. We present the construction of rate-1/2 QC-OSMLD codes based on Singer difference sets, and of high-rate codes based on Steiner triple systems, which allows a large choice of codes with different lengths and rates. The performance of this algorithm for decoding these codes on both the Additive White Gaussian Noise (AWGN) channel and the Rayleigh fading channel, to check its applicability in wireless environments, is investigated.

Author 1: Karim Rkizat
Author 2: Anouar Yatribi
Author 3: Mohammed Lahmer
Author 4: Mostafa Belkasmi

Keywords: Iterative threshold decoding; Quasi-Cyclic codes; OSMLD codes; Majority logic decoding; Steiner Triple System; BIBD

PDF

Paper 69: Multilingual Artificial Text Extraction and Script Identification from Video Images

Abstract: This work presents a system for extraction and script identification of multilingual artificial text appearing in video images. As opposed to most of the existing text extraction systems which target textual occurrences in a particular script or language, we have proposed a generic multilingual text extraction system that relies on a combination of unsupervised and supervised techniques. The unsupervised approach is based on application of image analysis techniques which exploit the contrast, alignment and geometrical properties of text and identify candidate text regions in an image. Potential text regions are then validated by an Artificial Neural Network (ANN) using a set of features computed from Gray Level Co-occurrence Matrices (GLCM). The script of the extracted text is finally identified using texture features based on Local Binary Patterns (LBP). The proposed system was evaluated on video images containing textual occurrences in five different languages including English, Urdu, Hindi, Chinese and Arabic. The promising results of the experimental evaluations validate the effectiveness of the proposed system for text extraction and script identification.

Author 1: Akhtar Jamil
Author 2: Azra Batool
Author 3: Zumra Malik
Author 4: Ali Mirza
Author 5: Imran Siddiqi

Keywords: Multilingual Text Detection; Video Images; Script Recognition; Artificial Neural Networks; Local Binary Patterns.

PDF
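The LBP texture features mentioned in the abstract encode, for each pixel, which of its eight neighbors are at least as bright as it, yielding an 8-bit code; a histogram of these codes then serves as the texture descriptor fed to a classifier. A minimal sketch of the basic (non-uniform, radius-1) variant follows; the paper's exact LBP configuration is not specified in the abstract, so this is a generic illustration.

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbor LBP: each interior pixel gets an 8-bit code whose
    k-th bit is set when the k-th neighbor (clockwise from top-left) is
    greater than or equal to the center pixel."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = img[y, x]
            code = 0
            for k, (dy, dx) in enumerate(offsets):
                code |= int(img[y + dy, x + dx] >= c) << k
            out[y - 1, x - 1] = code
    return out

def lbp_histogram(img):
    """256-bin histogram of LBP codes, usable as a texture feature vector."""
    codes = lbp_image(img)
    return np.bincount(codes.ravel(), minlength=256)
```

In practice such a histogram (often restricted to "uniform" patterns and normalized) would be computed per extracted text region and passed to the script classifier.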

Paper 70: Exploring the Potential of Mobile Crowdsourcing in the Sharing of Information on Items Prices

Abstract: This article presents the results of a survey performed to identify the potential of using mobile crowdsourcing as a means for consumers to exchange information on the prices of household items at local stores. The potential was identified from four perspectives: mobile device capability, internet usage pattern, supporting infrastructure and readiness towards information sharing. Survey questionnaires comprising 18 quantitative questions were distributed to 138 respondents in hardcopy and online forms over a one-month period in May 2014. The collected data were analysed using descriptive statistics and correlation analysis. The analyses showed that the potential of using mobile crowdsourcing to share item price information is high, as seen from the perspectives of mobile device capability and supporting infrastructure. The internet usage pattern of the consumers as well as their attitude towards information sharing also support this potential. To the best of our knowledge, this is the first study to gather statistical data on the potential of using mobile crowdsourcing for sharing information on item prices. This potential is usually assumed based on informal observations of the prevalence of mobile devices and their widespread use, and is not supported by empirical data. The study is of value to the broader research communities currently engaged in mobile crowdsourcing research for consumers' benefit.

Author 1: Hazleen Aris
Author 2: Marina Md Din

Keywords: Mobile crowdsourcing; Price comparison; Crowdsourcing potential; Crowdsourcing survey

PDF

Paper 71: Genetic-Based Task Scheduling Algorithm in Cloud Computing Environment

Abstract: Nowadays, Cloud computing is widely used in companies and enterprises. However, its use poses some challenges, the main one being resource management: Cloud computing provides IT resources (e.g., CPU, memory, network, storage, etc.) based on the virtualization concept and the pay-as-you-go principle, and the management of these resources has been a topic of much research. In this paper, a task scheduling algorithm based on the Genetic Algorithm (GA) is introduced for allocating and executing an application's tasks. The aim of the proposed algorithm is to minimize the completion time and cost of tasks, and to maximize resource utilization. Its performance has been evaluated using the CloudSim toolkit.

Author 1: Safwat A. Hamad
Author 2: Fatma A. Omara

Keywords: Cloud computing; Task Scheduling; Genetic Algorithm; Optimization Algorithm

PDF
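A GA-based task scheduler of the kind the abstract describes typically encodes a schedule as a chromosome mapping each task to a VM, evolves a population under crossover and mutation, and scores candidates by an objective such as makespan. The sketch below illustrates that encoding with a makespan-only fitness; the population size, operators and parameters are illustrative assumptions, not the paper's actual algorithm (which also weighs cost and utilization).

```python
import random

def makespan(assign, lengths, speeds):
    """Completion time of a schedule: the heaviest per-VM load,
    where task t placed on VM v contributes lengths[t] / speeds[v]."""
    loads = [0.0] * len(speeds)
    for t, v in enumerate(assign):
        loads[v] += lengths[t] / speeds[v]
    return max(loads)

def ga_schedule(lengths, speeds, pop=30, gens=50, pm=0.1, seed=0):
    """Toy GA: chromosome = list mapping task index -> VM index."""
    rnd = random.Random(seed)
    n, m = len(lengths), len(speeds)
    fit = lambda c: makespan(c, lengths, speeds)
    population = [[rnd.randrange(m) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fit)
        nxt = population[:2]                      # elitism: keep best two
        while len(nxt) < pop:
            p1, p2 = rnd.sample(population[:pop // 2], 2)  # fitter half
            cut = rnd.randrange(1, n)             # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rnd.random() < pm:                 # point mutation
                child[rnd.randrange(n)] = rnd.randrange(m)
            nxt.append(child)
        population = nxt
    return min(population, key=fit)
```

In a CloudSim-style evaluation, `lengths` would correspond to cloudlet lengths and `speeds` to VM MIPS ratings, with cost terms added to the fitness for a multi-objective version.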

Paper 72: The Methodology for Ontology Development in Lesson Plan Domain

Abstract: Ontology has been recognized as a knowledge representation mechanism that supports semantic web applications. A semantic web application that supports lesson plan construction is crucial for teachers dealing with the massive information sources from various domains on the web. Knowledge in the lesson plan domain therefore needs to be represented appropriately so that a web search retrieves only relevant materials; such retrieval requires an adequate representation of the domain. The emergence of semantic web technology provides a promising solution to improve the representation, sharing, and re-use of information to support decision making. This paper presents a new methodology for developing an ontological representation of the lesson plan domain to support semantic web applications. The methodology focuses on the important models, tools, and techniques in each phase of development, and consists of four phases: requirements analysis, development, implementation, and evaluation and maintenance.

Author 1: Aslina Saad
Author 2: Shahnita Shaharin

Keywords: knowledge representation; methodology; ontology development; lesson plan

PDF

Paper 73: A Novel Broadcast Scheme for DSR-based Mobile Ad hoc Networks

Abstract: Traffic classification seeks to assign packet flows to an appropriate quality of service (QoS) class. Despite many studies that have placed a lot of emphasis on broadcast communication, broadcasting in MANETs is still a problematic issue. Due to the absence of fixed infrastructure in MANETs, broadcast is an essential operation for all network nodes. Although blind flooding is the simplest broadcasting technique, it is inefficient and makes poor use of network resources. One scheme proposed to mitigate this deficiency is counter-based broadcasting, in which a node counts the duplicate packets it receives from neighbors that have already rebroadcast, and rebroadcasts only if this count stays below a threshold. Because existing counter-based schemes mainly rely on a fixed threshold, they are not efficient across different operating conditions. Thus, unlike existing studies, this paper proposes a dynamic counter-based threshold value and examines its effectiveness under the Dynamic Source Routing (DSR) protocol, one of the well-known on-demand routing protocols. Specifically, we develop a new counter-based broadcast algorithm under the umbrella of DSR, namely Inspired Counter Based Broadcasting (DSR-ICB). In various simulation experiments, DSR-ICB shows good performance, especially in terms of delay and the number of redundant packets.

Author 1: Muneer Bani Yassein
Author 2: Ahmed Y. Al-Dubai

Keywords: Dynamic counter-based scheme; Broadcasting; DSR

PDF
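The rebroadcast decision in a counter-based scheme, fixed or dynamic, reduces to comparing the duplicate count accumulated during a waiting period against a threshold; in a dynamic variant the threshold adapts to local density (higher in sparse neighborhoods to preserve coverage, lower in dense ones to cut redundancy). The sketch below illustrates that decision rule; the threshold values and the neighbor-count cutoff are hypothetical, not the parameters of DSR-ICB.

```python
def dynamic_threshold(n_neighbors, low=2, high=5, density_cutoff=6):
    """Hypothetical density-adaptive threshold: a sparse node (few
    neighbors) tolerates more duplicates before suppressing its
    rebroadcast, so coverage is preserved; a dense node suppresses
    sooner, cutting redundant transmissions."""
    return high if n_neighbors < density_cutoff else low

def should_rebroadcast(dup_count, n_neighbors):
    """Decision taken when the random assessment delay expires:
    rebroadcast only if the number of duplicates overheard so far is
    still below the (density-dependent) threshold."""
    return dup_count < dynamic_threshold(n_neighbors)
```

A fixed-threshold scheme is the special case `low == high`, which is exactly what makes it insensitive to the varying node densities the abstract argues against.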

© The Science and Information (SAI) Organization Limited. All rights reserved. Registered in England and Wales. Company Number 8933205. thesai.org