The Science and Information (SAI) Organization

IJACSA Volume 7 Issue 5

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.


Paper 1: A Hybrid Method to Predict Success of Dental Implants

Abstract: Background/Objectives: The market demand for dental implants is growing at a significant pace. Results obtained from real cases show that some dental implants do not lead to success. Hence, the main problem is whether machine learning techniques can be successful in predicting the success of dental implants. Methods/Statistical Analysis: This paper presents a combined predictive model to evaluate the success of dental implants. The classifiers used in this model are W-J48, SVM, Neural Network, K-NN and Naïve Bayes. All internal parameters of each classifier are optimized. These classifiers are combined in a way that results in the highest possible accuracy. Results: The performance of the proposed method is compared with that of single classifiers. The results of our study show that the combinative approach can achieve higher performance than the best of the single classifiers. Using the combinative approach improves the sensitivity indicator by up to 13.3%. Conclusion/Application: Since diagnosing patients whose implants will not lead to success is very important in implant surgery, the presented model can help surgeons make a more reliable decision on the level of success of an implant operation prior to surgery.

Author 1: Reyhaneh Sadat Moayeri
Author 2: Mehdi Khalili
Author 3: Mahsa Nazari

Keywords: Data Mining; Dental Implant; W-J48; Neural Network; K-NN; Naïve Bayes; SVM

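The abstract does not spell out the combination rule, so the following is only a minimal sketch of one way to combine the five listed classifiers in scikit-learn, with a CART decision tree standing in for W-J48 and soft majority voting standing in for the paper's accuracy-driven combination:

```python
# Hypothetical sketch: soft-voting combination of the five classifiers named
# in the abstract. The paper's actual combination rule and tuned parameters
# are not given here, so library defaults are used for illustration.
from sklearn.ensemble import VotingClassifier
from sklearn.tree import DecisionTreeClassifier      # stands in for W-J48 (C4.5)
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

def build_hybrid_model():
    return VotingClassifier(
        estimators=[
            ("tree", DecisionTreeClassifier()),
            ("svm", SVC(probability=True)),          # probabilities for soft voting
            ("mlp", MLPClassifier(max_iter=1000)),
            ("knn", KNeighborsClassifier()),
            ("nb", GaussianNB()),
        ],
        voting="soft",                               # average predicted probabilities
    )

# Usage (X: implant case features, y: success/failure labels):
# sensitivity = cross_val_score(build_hybrid_model(), X, y, scoring="recall").mean()
```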

Paper 2: ADBT Framework as a Testing Technique: An Improvement in Comparison with Traditional Model-Based Testing

Abstract: Software testing is an embedded activity in all software development life cycle phases. Due to the difficulties and high costs of software testing, many testing techniques have been developed with the common goal of testing software in the most optimal and cost-effective manner. Model-based testing (MBT) is used to direct testing activities such as test verification and selection. MBT is employed to encapsulate and understand the behavior of the system under test, which supports and helps software engineers to validate the system against various likely actions. The widespread usage of models has influenced the usage of MBT in the testing process, especially with UML. In this research, we propose an improved model-based testing strategy that uses four different diagrams in the testing process. This paper also discusses and explains the activities in the proposed model with the finite state machine (FSM). Comparisons have been made with traditional model-based testing in terms of test case generation and results.

Author 1: Mohammed Akour
Author 2: Bouchaib Falah
Author 3: Karima Kaddouri

Keywords: Activity Diagram; Black Box Testing; Finite State Machine; Model-Based Testing; Software Testing; Test Suite; Test Case; Use Case Diagram


Paper 3: Classifying Arabic Documents Using a Semi-Supervised Technique

Abstract: In this work, we test the performance of the Naïve Bayes classifier in the categorization of Arabic text. Arabic is rich and unique in its own way and has its own distinct features. The issues and characteristics of the Arabic language are addressed in our study, and the classifier was modified and adjusted to fit the needs of the language. A vector of words and their frequencies is used to represent each document. We trained our classifier using both supervised and semi-supervised techniques in an attempt to compare them and see whether classification accuracy improves as a result of using the semi-supervised technique. Various experiments were performed, and the thoroughness of the classifier was measured using recall, precision, fallout and error. The outcomes illustrate that semi-supervised learning can significantly enhance the classification accuracy of Arabic text.

Author 1: Dr. Khalaf Khatatneh

Keywords: Arabic Language; Naïve Bayes; Classifier; Indexing; Stop word

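As an illustration of the semi-supervised setup described above, here is a minimal self-training sketch over word-frequency vectors with a Naïve Bayes base classifier; scikit-learn's SelfTrainingClassifier and the toy English placeholder documents are stand-ins, not the paper's modified classifier or its Arabic corpus:

```python
# Minimal self-training sketch: a Naïve Bayes classifier over word-frequency
# vectors, retrained with its own confident predictions on unlabeled text.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.semi_supervised import SelfTrainingClassifier

docs = ["stock prices fell", "team wins final", "market rally continues",
        "coach praises players", "shares rebound", "striker scores twice"]
labels = [0, 1, 0, 1, -1, -1]           # -1 marks unlabeled documents

X = CountVectorizer().fit_transform(docs).toarray()   # word/frequency vectors
model = SelfTrainingClassifier(MultinomialNB(), threshold=0.6)
model.fit(X, labels)                    # confident unlabeled docs join training
print(model.predict(X[-2:]))            # labels inferred for the unlabeled docs
```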

Paper 4: Geographical Information System Based Approach to Monitor Epidemiological Disaster: 2011 Dengue Fever Outbreak in Punjab, Pakistan

Abstract: Epidemiological disaster management using geo-informatics (GIS) is an innovative field of rapid information gathering. Dengue fever, a vector-borne disease also known as break-bone fever, is a lethal re-emerging arboviral disease. Its endemic flow is causing serious effects on the economy and health at the global level. Even now, many under-developed and developing countries like Pakistan lack the necessary GIS technologies to monitor such health issues. The aim of this study is to enhance developing countries' disaster management capabilities by using state-of-the-art technologies, which provide the measures to relieve the disaster burden on public sector agencies. In this paper, temporal changes and the regional burden of the distribution of this disease are mapped using GIS tools. These types of studies are widely used to provide effective help and relief in preventing the disaster burden. This study concludes that a public sector institute can use such tools for surveillance purposes and to identify risk areas for possible precautionary measures.

Author 1: Shahbaz Ahmad
Author 2: Muhammad Asif
Author 3: Muhammad Yasir
Author 4: Shahzad Nazir
Author 5: Muhammad Majid
Author 6: Muhammad Umar Chaudhry

Keywords: GIS; Dengue; Hemorrhagic fever; Aedes aegypti


Paper 5: Towards Face Recognition Using Eigenface

Abstract: This paper presents a face recognition system employing an eigenface-based approach. The principal objective of this research is to extract feature vectors from images and to reduce the dimension of the information. The method is implemented on frontal-view facial images of persons to explore a two-dimensional representation of facial images. The system employs the RMS (Root Mean Square) contrast scaling technique for pre-processing the images to adjust for poor lighting conditions. Experiments have been conducted using the Carnegie Mellon University database of human faces and the University of Essex Computer Vision Research Projects dataset. Experimental results indicate that the proposed eigenface-based approach can classify the faces with an accuracy of more than 80% in all cases.

Author 1: Md. Al-Amin Bhuiyan

Keywords: Eigenvector; Eigenface; RMS Contrast Scaling; Face Recognition

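A minimal sketch of the eigenface pipeline described above: PCA on flattened grayscale faces plus nearest-neighbour matching. The RMS-contrast step is interpreted here as zero-mean, unit-standard-deviation scaling, which is an assumption:

```python
# Eigenface sketch: normalize contrast, project flattened face images onto
# the top principal components, and match by nearest neighbour.
import numpy as np

def rms_contrast_scale(img):
    # Zero mean, unit RMS contrast (standard deviation of intensities);
    # an assumed reading of the paper's RMS contrast scaling step.
    return (img - img.mean()) / img.std()

def train_eigenfaces(faces, n_components=20):
    """faces: (n_images, height*width) array of flattened grayscale images."""
    mean_face = faces.mean(axis=0)
    centered = faces - mean_face
    # Rows of vt are the principal components, i.e. the eigenfaces.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = vt[:n_components]
    weights = centered @ eigenfaces.T        # low-dimensional representation
    return mean_face, eigenfaces, weights

def recognize(probe, mean_face, eigenfaces, weights, labels):
    w = (probe - mean_face) @ eigenfaces.T
    distances = np.linalg.norm(weights - w, axis=1)
    return labels[int(np.argmin(distances))]  # identity of the closest face
```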

Paper 6: Empirical Analysis of Metrics Using UML Class Diagram

Abstract: Many organizations assess the maintainability of software systems before they are deployed. Object-oriented design is a critical strategy for producing quality program architecture. Object-oriented measurements may be used to study the soundness of a class diagram's structure, particularly for software evaluations and for how the models have been developed and portrayed. UML class diagram metrics support the maintenance of object-oriented software; this is examined through investigation of the association between object-oriented metrics and maintainability. This paper shows the results of a scientific evaluation of software maintainability prediction and metrics. The research focuses on the software quality attribute of maintainability, as opposed to the process of software maintenance, and it also aims to find the vital correlation between structural complexity metrics and maintenance time. Several investigators have done copious work on this topic, obtained many theoretical outcomes, and subsequently established a chain of practical uses. Due to dynamic changes in object-oriented technology, the class diagram is today an essential UML model, and researchers must first get to know the use of software in a scientific manner. It is an affordable strategy which has had exceptional results in recent times. This paper is concerned with UML class diagram metrics, through which a way is provided to maintain UML class diagram complexity weights. The qualities of a UML class diagram efficiently and technically reflect the complexity of object-oriented software. A more specific research study has shown that the technique is associated with an individual's experience and can be useful for improving software quality.

Author 1: Bhawana Mathur
Author 2: Manju Kaushik

Keywords: UML Class diagram; Maintainability; Object Oriented System; CK Metrics suite; Model; Software; UML


Paper 7: A Reduced Switch Voltage Stress Class E Power Amplifier Using Harmonic Control Network

Abstract: In this paper, a harmonic control network (HCN) is presented to reduce the voltage stress (maximum MOSFET voltage) of the class E power amplifier (PA). The effects of the HCN on the amplifier specifications are investigated. The results show that the proposed HCN affects several specifications of the amplifier, such as drain voltage, switch current, output power capability (Cp factor), and drain impedance. The output power capability of the presented amplifier is also improved compared with the conventional class E structure. High voltage stress limits the design specifications of the desired amplifier; therefore, several limitations can be removed with the reduced switch voltage. According to the results, the maximum drain voltage of the presented amplifier is reduced and, subsequently, the output power capability is increased by about 25% using the presented structure. The zero-voltage switching (ZVS) and zero-voltage derivative switching (ZVDS) conditions are assumed in the design procedure. These two conditions are essential for achieving high efficiency in various classes of switching amplifiers. A class E PA with an operating frequency of 1 MHz is designed and simulated using Advanced Design System (ADS) and PSpice software. The theoretical and simulated results are in good agreement.

Author 1: Ali Reza Zirak
Author 2: Sobhan Roshani

Keywords: class E power amplifier; harmonic control network (HCN); MOSFET drain Impedance; ZVS and ZVDS conditions


Paper 8: Quizzes: Quiz Application Development Using Android-Based MIT APP Inventor Platform

Abstract: This work deals with the development of an Android-based multiple-choice question examination system, namely Quizzes. This application is developed for educational purposes, allowing users to prepare for multiple-choice question examinations conducted at the provincial and national levels. The main goal of the application is to enable users to practice for subjective tests conducted for admissions and recruitment, with a focus on the Computer Science field. The quiz application includes three main modules, namely (i) computer science, (ii) verbal, and (iii) analytical. The computer science and verbal modules contain various sub-categories. The quiz includes three life-line functions: (i) Hint, (ii) Skip, and (iii) Pause. Each of these functions can be used only once by a user. The app shows progress feedback during quiz play and, at the end, it also shows the result.

Author 1: Muhammad Zubair Asghar
Author 2: Iqra Sana
Author 3: Khushboo Nasir
Author 4: Hina Iqbal
Author 5: Fazal Masud Kundi
Author 6: Sadia Ismail

Keywords: Quiz; Android; MIT App Inventor; Interviews and test preparation


Paper 9: Performance of Spectral Angle Mapper and Parallelepiped Classifiers in Agriculture Hyperspectral Image

Abstract: Hyperspectral Imaging (HSI) provides a wealth of information which can be used to address a variety of problems in different applications. The main requirement in all applications is the classification of HSI data. In this paper, supervised HSI classification algorithms are used to extract agricultural areas that specialize in wheat growing and to obtain a classified image. In particular, the Parallelepiped and Spectral Angle Mapper (SAM) algorithms are used. They are implemented with ENVI (Environment for Visualizing Images), a software tool used to analyse and process geospatial images, and are applied to Al-Kharj, Saudi Arabia, as the study area. The overall accuracy after applying the algorithms on the image of the study area was 66.67% for SAM classification and 33.33% for Parallelepiped classification. Therefore, the SAM algorithm provided a better classification of the study area image.

Author 1: Sahar A. El_Rahman

Keywords: Accuracy Assessment; ENVI; Hyperspectral Imaging; Parallelepiped Classifier; Spectral Angle Mapper; Supervised Classification

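SAM scores each pixel by the angle between its spectrum and a reference spectrum, treating both as vectors. A minimal NumPy sketch of this rule, independent of ENVI and with an illustrative angle threshold, is:

```python
# Spectral Angle Mapper sketch: classify each pixel by the smallest spectral
# angle to a set of reference (endmember) spectra.
import numpy as np

def spectral_angle(pixels, reference):
    """Angle (radians) between each pixel spectrum and one reference spectrum.
    pixels: (n, bands); reference: (bands,)."""
    cos_t = pixels @ reference / (
        np.linalg.norm(pixels, axis=1) * np.linalg.norm(reference) + 1e-12
    )
    return np.arccos(np.clip(cos_t, -1.0, 1.0))

def sam_classify(cube, references, max_angle=0.10):
    """cube: (rows, cols, bands); references: (n_classes, bands).
    Returns per-pixel class index, or -1 where no reference is close enough."""
    pixels = cube.reshape(-1, cube.shape[-1]).astype(float)
    angles = np.stack([spectral_angle(pixels, r) for r in references], axis=1)
    labels = angles.argmin(axis=1)
    labels[angles.min(axis=1) > max_angle] = -1   # leave the pixel unclassified
    return labels.reshape(cube.shape[:2])
```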

Paper 10: Identify and Manage the Software Requirements Volatility

Abstract: Managing software requirements volatility throughout the development life cycle is a very important activity. It helps the team to control the volatility's significant impact across the project (cost, time and effort), and it also keeps the project on track, ultimately satisfying the user, which is the main success criterion for a software project. In this research paper, we have analysed the root causes of requirements volatility through a proposed framework presenting the causes of requirements volatility and how to manage it during the software development life cycle. Our proposed framework identifies requirement error types and the causes of requirements volatility, and shows how to manage these volatilities in order to know the necessary changes and take the right decision according to volatility measurements (priorities, status and working hours). The framework contains four major phases (an Elicitation and Analysis phase, a Specification Validation phase, a Requirements Volatility Causes phase and a Changes Management phase), and we explain each phase in detail.

Author 1: Khloud Abd Elwahab
Author 2: Mahmoud Abd EL Latif
Author 3: Sherif Kholeif

Keywords: software requirements; requirement errors; requirements volatility; reason for requirement changes and control changes


Paper 11: Carbon Break Even Analysis: Environmental Impact of Tablets in Higher Education

Abstract: With the growing pace of tablet use and the attention it is attracting, especially in higher education, this paper looks at an important aspect of tablets: their carbon footprint. Studies have suggested that tablets have a positive impact on the environment, especially since tablets use less energy than laptops or desktops. However, recent manufacturers' reports on the carbon footprint of tablets have revealed that a significant portion, as much as 80%, of the carbon footprint of tablets comes from production and delivery as opposed to the operational life cycle of these devices, thus rendering some previous assumptions about the environmental impact of tablets questionable. This study sets out to answer a key question: what is the break-even point at which savings on printed paper offset the carbon footprint of producing and running a tablet in higher education? A review of the literature indicated several examples of tablet models and their carbon emission impact; this is compared to the environmental savings on paper that green courses could produce. The analysis of the carbon break-even point shows that even for some of the most efficient and lowest-carbon-impact tablets available on the market, with a production carbon footprint of 153 kg CO2e, the break-even point is 81.5 months, that is, 6 years, 9 months and 15 days of use. This exceeds the average tablet life cycle of five years and the average degree duration of four years. While tablets still have the lowest carbon-footprint impact compared to laptops and desktops, this study concludes that to achieve the break-even point of carbon-neutral operations, manufacturers need to find more environmentally efficient ways of production that reduce the production carbon footprint to a level that does not exceed 112.8 kg CO2e.

Author 1: Fadi Safieddine
Author 2: Imad Nakhoul

Keywords: Environmental; Tablet; Higher Education; Carbon-footprint; Break-even Analysis

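The break-even figures quoted above can be checked with simple arithmetic. The sketch below back-calculates the implied monthly paper saving from the abstract's numbers; operational emissions during use are ignored in this simplification:

```python
# Back-of-the-envelope check of the carbon break-even figures above.
production_footprint = 153.0        # kg CO2e to produce/deliver the tablet
break_even_months = 81.5            # break-even point reported in the abstract

# Implied paper-related saving per month of use:
monthly_saving = production_footprint / break_even_months    # ~1.88 kg CO2e

# Production footprint that would break even within a 5-year (60-month) life:
target = monthly_saving * 60
print(f"implied monthly saving: {monthly_saving:.2f} kg CO2e")
print(f"target footprint for a 60-month break-even: {target:.1f} kg CO2e")
# Prints ~112.6 kg CO2e, close to the 112.8 kg CO2e quoted in the abstract.
```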

Paper 12: Performance Analysis of Enhanced Interior Gateway Routing Protocol (EIGRP) Over Open Shortest Path First (OSPF) Protocol with OPNET

Abstract: Due to the increasingly easy accessibility of computers and mobile phones alike, routing has become indispensable in deciding how computers communicate, especially in modern computer communication networks. This paper presents a performance analysis of EIGRP and OSPF for real-time applications using the Optimized Network Engineering Tool (OPNET). In order to evaluate the performance of OSPF and EIGRP, three network models were designed, where the first, second and third network models are configured with OSPF, EIGRP and a combination of EIGRP and OSPF, respectively. Evaluation of the routing protocols was performed based on quantitative metrics such as convergence time, jitter, end-to-end delay, throughput and packet loss through the simulated network models. The evaluation results showed that the EIGRP protocol provides better performance than the OSPF routing protocol for real-time applications. By examining the results of simulating the various scenarios (convergence times in particular), the routing protocol with the best performance for a large, realistic and scalable network was identified.

Author 1: Anibrika Bright Selorm Kodzo
Author 2: Mustapha Adamu Mohammed
Author 3: Ashigbi Franklin Degadzor
Author 4: Michael Asante

Keywords: Routing; Protocol; Algorithm; Throughput


Paper 13: Identify and Classify Critical Success Factor of Agile Software Development Methodology Using Mind Map

Abstract: Selecting the right method, the right personnel and the right practices, and applying them adequately, determines the success of software development. In this paper, a qualitative study is carried out on the critical success factors reported in previous studies. The success factors are matched with their related agile principles to illustrate the most valuable factors for the success of the agile approach. The paper also shows that the twelve agile principles are poorly identified for a few of the factors resulting from past qualitative and quantitative studies. Dimensions and factors are presented using a Critical Success Dimensions and Factors Mind Map model.

Author 1: Tasneem Abd El Hameed
Author 2: Mahmoud Abd EL Latif
Author 3: Sherif Kholief

Keywords: Agile success factor; Agile principles


Paper 14: An Enhanced Framework with Advanced Study to Incorporate the Searching of E-Commerce Products Using Modernization of Database Queries

Abstract: This study aims to inspect and evaluate the integration of database queries and their use in e-commerce product searches. E-commerce has been observed to be one of the most prominent trends to have emerged in the business world over the past decade. It has gained tremendous popularity, as it offers higher flexibility, cost efficiency, effectiveness, and convenience to both consumers and businesses. A large number of retailing companies have adopted this technology in order to expand their operations across the globe; hence, they need highly responsive and integrated databases. In this regard, the approach of database queries is found to be the most appropriate and adequate technique, as it simplifies the searching of e-commerce products.

Author 1: Mohd Muntjir
Author 2: Ahmad Tasnim Siddiqui

Keywords: E-Commerce; Database; Queries; Integration; Database Queries


Paper 15: IAX-JINGLE Network Architectures Based-One/Two Translation Gateways

Abstract: Nowadays, multimedia communication has improved rapidly to allow people to communicate via the Internet. However, Internet users cannot communicate with each other unless they use the same chatting application, since each chatting application uses a certain signaling protocol to make the media call. The interworking module is a very critical issue, since it solves the communication problems between any two protocols and enables people around the world to make a voice/video call even if they use different chatting applications. Providing interoperability between different signaling protocols and multimedia applications takes advantage of more than one protocol. Usually, each signaling protocol has its own messages, whose format differs from that of other signaling protocols' messages. Thus, when two clients using different signaling protocols want to communicate by voice, the messages sent and received between them will not be understood, because the control and media packets in each protocol are different from the corresponding ones in the other protocol. The interworking module solves this kind of problem by matching the signaling and media messages through translation gateways placed between the two protocols. Thus, many interworking modules have been proposed in order to enable the users of many protocols to chat with each other without any difficulties. This paper compares two interworking modules between the Inter-Asterisk eXchange protocol and the Jingle protocol. An experimental evaluation in terms of session time is provided.

Author 1: Hadeel Saleh Haj Aliwi
Author 2: Putra Sumari

Keywords: media conferencing; VoIP; interworking; translation gateway; IAX; Jingle


Paper 16: Smoothness Measure for Image Fusion in Discrete Cosine Transform

Abstract: The aim of image fusion is to generate high-quality images using information from source images. The fused image contains more information than any of the source images. Image fusion using transforms is more effective than spatial methods. Statistical measures such as mean, contrast, and variance are used in the Discrete Cosine Transform (DCT) domain for image fusion. In this paper, we use a statistical measure, the smoothness of a block in the transform domain, to select appropriate blocks from multiple images to obtain a fused image. Smoothness captures important blocks in images and duly eliminates noisy blocks. Furthermore, we compare and analyze all the statistical measures in the DCT domain. Experimental results establish the superiority of our proposed method over state-of-the-art techniques for image fusion.

Author 1: Radhika Vadhi
Author 2: Veera Swamy Kilari
Author 3: Srinivas Kumar Samayamantula

Keywords: smoothness; statistical measures; DCT; image fusion

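The abstract does not define the smoothness measure itself; the sketch below assumes one plausible reading, namely the fraction of a DCT block's energy concentrated in its low-frequency coefficients, and is illustrative only:

```python
# Hypothetical block-smoothness measure in the DCT domain: the share of a
# block's energy held by the DC/low-frequency coefficients. The definition
# and the keep-the-detailed-block rule below are assumptions, not the
# paper's exact formulation.
import numpy as np
from scipy.fft import dctn

def block_smoothness(block):
    """block: 8x8 image block. Returns a value in (0, 1]."""
    coeffs = dctn(block, norm="ortho")
    energy = np.sum(coeffs ** 2)
    low = np.sum(coeffs[:2, :2] ** 2)      # DC and nearest AC terms
    return low / energy if energy > 0 else 1.0

def fuse_blocks(block_a, block_b):
    # Keep the block carrying more high-frequency detail (lower smoothness).
    return block_a if block_smoothness(block_a) < block_smoothness(block_b) else block_b
```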

Paper 17: The Factors of Subjective Voice Disorder Using Integrated Method of Decision Tree and Multi-Layer Perceptron Artificial Neural Network Algorithm

Abstract: The aim of the present study was to develop a prediction model for subjective voice disorders based on an artificial neural network algorithm and a decision tree using national statistical data. The subjects of analysis were 8,713 adults over the age of 19 (3,801 males and 4,912 females) who completed the otolaryngological examination of the Korea National Health and Nutrition Examination Survey from 2010 to 2012. Explanatory variables included age, education level, income, occupation, problem drinking, coffee consumption, and pain and discomfort from disease over the last two weeks. A multi-layer perceptron artificial neural network and a decision tree model were used for the analysis. In this model, smoking, pain and discomfort from disease over the last two weeks, education level, occupation, and income were identified as major predictors of subjective voice disorders. In order to minimize the risk of dysphonia, it is necessary to establish a scientific management system for high-risk groups.

Author 1: Haewon Byeon
Author 2: Sunghyoun Cho

Keywords: Neural Networks; Subjective Voice Disorder; decision tree; risk factor; data-mining


Paper 18: SSH Honeypot: Building, Deploying and Analysis

Abstract: This article discusses the various techniques that can be used while developing a honeypot, of any form, while considering the advantages and disadvantages of these very different methods. The foremost aims are to cover the principles of the Secure Shell (SSH), how it can be useful and, more importantly, how attackers can gain access to a system by using it. The article involved the development of multiple low-interaction honeypots. The low-interaction honeypots that have been developed make use of the well-documented libssh library, and even involve editing the source code of an already available SSH daemon. Finally, the aim is to combine the results with those of the widely distributed Kippo honeypot, in order to compare and contrast the results along with the usability and necessity of particular features, providing a clean and simple description that allows less knowledgeable users to create and deploy a honeypot of production quality, adding security advantages to their network instantaneously.

Author 1: Harry Doubleday
Author 2: Leandros Maglaras
Author 3: Helge Janicke

Keywords: SSH Honeypot; Cyber Security


Paper 19: MMO: Multiply-Minus-One Rule for Detecting & Ranking Positive and Negative Opinion

Abstract: A hot issue concerning reviews of any product is sentiment classification. Not only does the manufacturing company of the reviewed product take decisions about its quality, but customers' purchases of the product are also based on the reviews. Instead of reading all the reviews one by one, different works have classified them, with preprocessing, as negative or positive. Suppose that out of 1000 reviews, 300 are negative and 700 are positive; as a whole, the orientation is positive. The company and the customer may not be satisfied with this sentiment orientation. For companies, negative reviews should be separated with respect to different aspects and features, so that companies can enhance the features of the product; there is also a lot of work on aspect extraction followed by aspect-based sentiment analysis. On the other hand, users want the most positive reviews and the most negative reviews, so that they can decide on purchasing a certain product. To consider the issue from the users' perspective, the authors suggest a method, Multiply-Minus-One (MMO), which can evaluate each review and find scores based on positive, negative, intensifier and negation words using the WordNet dictionary. Experiments on four types of product-review datasets show that this method can achieve precision performance of 86%, 83%, 83% and 85%.

Author 1: Sheikh Muhammad Saqib
Author 2: Fazal Masud Kundi

Keywords: Sentiment Classification; Preprocessing; Text Mining; Sentiment Orientation

PDF
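A much-simplified sketch of the scoring rule suggested by the method's name; the paper's actual scoring scheme and its WordNet lookups are not reproduced, and the word lists here are illustrative placeholders:

```python
# Simplified review scorer in the spirit of MMO: sum word polarities, scale
# on intensifiers, and multiply by minus one when a negation precedes a
# sentiment word.
POSITIVE = {"good", "great", "excellent"}
NEGATIVE = {"bad", "poor", "terrible"}
NEGATIONS = {"not", "never", "no"}
INTENSIFIERS = {"very": 2.0, "extremely": 3.0}

def score_review(text):
    score, weight, negate = 0.0, 1.0, False
    for word in text.lower().split():
        if word in NEGATIONS:
            negate = True
        elif word in INTENSIFIERS:
            weight *= INTENSIFIERS[word]
        elif word in POSITIVE or word in NEGATIVE:
            polarity = 1.0 if word in POSITIVE else -1.0
            if negate:
                polarity *= -1.0         # the multiply-minus-one step
            score += weight * polarity
            weight, negate = 1.0, False  # reset context after a sentiment word
    return score                          # rank reviews by sign and magnitude

# e.g. score_review("not very good") -> -2.0
```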

Paper 20: Improving Accelerometer-Based Activity Recognition by Using Ensemble of Classifiers

Abstract: In line with the increasing use of sensors and health applications, there are huge efforts on processing collected data, such as accelerometer data, to extract valuable information. This study proposes an activity recognition model that aims to detect activities by employing ensemble-of-classifiers techniques on the Wireless Sensor Data Mining (WISDM) dataset. The model recognizes six activities, namely walking, jogging, going upstairs, going downstairs, sitting, and standing. Many experiments were conducted to determine the best classifier combination for activity recognition. An improvement is observed in the performance when the classifiers are combined compared with when they are used individually. An ensemble model is built using AdaBoost in combination with the decision tree algorithm C4.5. The model effectively enhances the performance, with an accuracy level of 94.04%.

Author 1: Tahani Daghistani
Author 2: Riyad Alshammari

Keywords: Activity Recognition; Sensors; Smart phones; accelerometer data; Data mining; Ensemble

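The boosted-tree ensemble described above can be sketched with scikit-learn, whose CART trees stand in for C4.5; windowed feature extraction from the raw WISDM accelerometer signal is assumed to have been done already:

```python
# AdaBoost over decision trees for the six-activity recognition task.
# X: features per accelerometer window; y: labels in {walking, jogging,
# upstairs, downstairs, sitting, standing}.
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

model = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=8),  # CART stands in for C4.5
    n_estimators=50,                                # boosting rounds
)  # use base_estimator= on scikit-learn < 1.2

# Typical evaluation:
# accuracy = cross_val_score(model, X, y, cv=10).mean()
```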

Paper 21: A Multimodal Firefly Optimization Algorithm Based on Coulomb’s Law

Abstract: In this paper, a multimodal firefly algorithm named the CFA (Coulomb Firefly Algorithm) is presented, based on Coulomb's law. The algorithm is able to find more than one optimal solution in the problem search space without requiring any additional parameter. In the proposed method, less bright fireflies are attracted to fireflies which are not only brighter but which, according to Coulomb's law, also exert the strongest pull. Approaching the end of the iterations, the fireflies' motion steps are reduced, which finally yields more accurate results. Within a limited number of iterations, groups of fireflies gather around the global and local optimal points. After the final iteration, the firefly with the highest fitness value in each group survives and the rest are omitted. Experiments and comparisons on the CFA algorithm show that the proposed method is successful in solving multimodal optimization problems.

Author 1: Taymaz Rahkar-Farshi
Author 2: Sara Behjat-Jamal

Keywords: Swarm Intelligence; multimodal firefly algorithm; multimodal optimization; firefly algorithm

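A loose sketch of one reading of this attraction rule, assuming each firefly's "charge" is its (positive) fitness and the pull between two fireflies follows an inverse-square law; the paper's exact formulation and step schedule are not reproduced:

```python
# Loose sketch of a Coulomb-style firefly step: each firefly moves toward the
# brighter firefly exerting the strongest inverse-square pull on it. The
# charge-as-fitness choice and the shrinking step are assumptions.
import numpy as np

def cfa_step(positions, fitness, step):
    """positions: (n, dims); fitness: (n,) assumed positive; step: scalar."""
    new_positions = positions.copy()
    for i in range(len(positions)):
        best_pull, target = 0.0, None
        for j in range(len(positions)):
            if fitness[j] > fitness[i]:              # only brighter fireflies attract
                d2 = np.sum((positions[j] - positions[i]) ** 2) + 1e-12
                pull = fitness[i] * fitness[j] / d2  # Coulomb-like q_i * q_j / d^2
                if pull > best_pull:
                    best_pull, target = pull, j
        if target is not None:
            direction = positions[target] - positions[i]
            new_positions[i] += step * direction     # move toward strongest pull
    return new_positions

# Over iterations, `step` is decreased so groups of fireflies settle around
# separate local and global optima (the multimodal behaviour described above).
```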

Paper 22: The Impact of Privacy Concerns and Perceived Vulnerability to Risks on Users' Privacy Protection Behaviors on SNS: A Structural Equation Model

Abstract: This research paper investigates Saudi users' awareness levels regarding privacy policies on Social Networking Sites (SNSs), their privacy concerns and their privacy protection measures. For this purpose, a research model was developed consisting of five main constructs, namely information privacy concern, awareness level of the privacy policies of social networking sites, perceived vulnerability to privacy risks, perceived response efficacy, and privacy protecting behavior. An online survey questionnaire was used to collect responses from a sample of 108 Saudi SNS users. The study found that Saudi users of social networking sites are concerned about their information privacy, but do not have enough awareness of the importance of privacy protecting behaviors to safeguard their privacy online. The results also showed that there is a lack of awareness of the privacy policies of social networking sites among Saudi users. Hypothesis testing using Structural Equation Modeling (SEM) showed that information privacy concern positively affects privacy protection behaviors on SNSs, and that perceived vulnerability to privacy risks positively affects information privacy concern.

Author 1: Noora Sami Al-Saqer
Author 2: Mohamed E. Seliaman

Keywords: Social networking sites (SNSs); information privacy concern; perceived vulnerability; SEM; protection behavior


Paper 23: Assessment Model for Language Learners’ Writing Practice (in Preparing for TOEFL iBT) Based on Comparing Structure, Vocabulary, and Identifying Discrepant Essays

Abstract: This study aims to investigate whether learners of English can improve computer-assisted writing skills, through the analysis of data from the post-test. The focus was on intermediate-level students of English taking final writing tests (integrated and independent responses) in preparation for the TOEFL iBT. We manually scored and categorized the students' writing responses into five-point levels to provide the data for building the software. The results of the study showed that the model could be suitable for computerized scoring, enabling language instructors to grade in a fair and exact way and students to improve their writing performance through practice on the computer.

Author 1: Duc Huu Pham
Author 2: Tu Ngoc Nguyen

Keywords: Computer-assisted writing skills; computerized scoring; integrated and independent responses; model; posttest


Paper 24: Parallel and Distributed Genetic Algorithm with Multiple-Objectives to Improve and Develop of Evolutionary Algorithm

Abstract: In this paper, we argue that the timetabling problem reflects the problem of scheduling university courses: a range of time periods and a group of instructors must be assigned to a range of lectures so as to satisfy a set of hard constraints and reduce the cost of violating other constraints. This problem is NP-hard, which informally means that the number of operations necessary to solve it increases exponentially with the size of the problem. The construction of a timetable is one of the most complicated problems facing many universities, and it grows with the size of the university's data and the overlap of disciplines between colleges. When a traditional evolutionary algorithm (EA) is unable to provide satisfactory results, a distributed EA (dEA), which deploys the population on distributed systems, offers an opportunity to solve extremely high-dimensional problems through distributed coevolution using a divide-and-conquer mechanism. Further, the distributed environment allows a dEA to maintain population diversity, thereby avoiding local optima and also facilitating multi-objective search. By employing different distributed models to parallelize the processing of EAs, we designed a genetic algorithm suitable for the university environment and the constraints faced when building a timetable for lectures.

Author 1: Khalil Ibrahim Mohammad Abuzanouneh

Keywords: Heterogeneous clusters; NP-hard; evolutionary multi-objective algorithm; parallel algorithms; Real-time scheduling


Paper 25: Gender Prediction for Expert Finding Task

Abstract: Predicting gender from names is one of the most interesting problems in the domain of Information Retrieval and the expert finding task. In this research paper, we propose a machine learning approach to the gender prediction task. We propose a new feature, namely the combination of letters in names, which gives 86.54% accuracy. Our data collection consists of 3000 Urdu-language names written using the English alphabet. This technique can be used to extract names from email addresses and hence is also valid for emails. To the best of our knowledge, it is the first-ever attempt at predicting gender from Pakistani (Urdu) names written using the English alphabet.

Author 1: Daler Ali
Author 2: Malik Muhammad Saad Missen
Author 3: Nadeem Akhtar
Author 4: Nadeem Salamat
Author 5: Hina Asmat
Author 6: Amnah Firdous

Keywords: Urdu; Semantic Web; Gender Prediction; Expert Profiling; Machine Learning

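The letter-combination feature can be approximated with character n-grams; a minimal stand-in pipeline is sketched below. The toy names and the Naïve Bayes choice are illustrative assumptions, not the paper's 3000-name dataset or model:

```python
# Character n-gram features ("combinations of letters") for gender prediction
# from romanized names; toy data for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

names = ["ahmed", "fatima", "bilal", "ayesha"]   # placeholder names
genders = ["m", "f", "m", "f"]

model = make_pipeline(
    CountVectorizer(analyzer="char_wb", ngram_range=(2, 3)),  # letter pairs/triples
    MultinomialNB(),
)
model.fit(names, genders)
print(model.predict(["amina"]))   # e.g. ['f']
```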

Paper 26: A Robust Approach for Action Recognition Based on Spatio-Temporal Features in RGB-D Sequences

Abstract: Recognizing human actions is an attractive research topic in computer vision, since it plays an important role in applications such as human-computer interaction, intelligent surveillance, human action retrieval systems, health care, smart homes, robotics and so on. The availability of the low-cost Microsoft Kinect sensor, which can capture real-time high-resolution RGB and visual depth information, has opened an opportunity to significantly increase the capabilities of many automated vision-based recognition tasks. In this paper, we propose a new framework for action recognition in RGB-D video. We extract spatio-temporal features from RGB-D data that capture visual, shape and motion information. Moreover, a segmentation technique is applied to represent the temporal structure of an action. Firstly, we use STIP to detect interest points in both the RGB and depth channels. Secondly, we apply the HOG3D descriptor to the RGB channel and the 3DS-HONV descriptor to the depth channel. In addition, we also extract HOF2.5D from the fused RGB and depth data to capture human motion. Thirdly, we divide the video into segments and apply a GMM to create feature vectors for each segment, giving three feature vectors (HOG3D, 3DS-HONV, and HOF2.5D) that represent each segment. Next, the max pooling technique is applied to create a final vector for each descriptor. Then, we concatenate the feature vectors from the previous step into the final vector for action representation. Lastly, we use the SVM method for the classification step. We evaluated our proposed method on three benchmark datasets to demonstrate generalizability, and the experimental results are shown to be more accurate for action recognition compared to previous works. We obtain overall accuracies of 93.5%, 99.16% and 89.38% with our proposed method on the UTKinect-Action, 3D Action Pairs, and MSR Daily Activity 3D datasets, respectively. These results show that our method is feasible and achieves performance superior to the state-of-the-art methods on these datasets.

Author 1: Ly Quoc Ngoc
Author 2: Vo Hoai Viet
Author 3: Tran Thai Son
Author 4: Pham Minh Hoang

Keywords: Action Recognition; Depth Sequences; GMM; SVM; Multiple Features; Spatio-Temporal Features


Paper 27: Using Business Intelligence Tools for Predictive Analytics in Healthcare System

Abstract: The scope of this article is to highlight how healthcare analytics can be improved using Business Intelligence tools. The healthcare system has learned from previous lessons the necessity of using healthcare analytics for improving patient care, hospital administration, population growth and many other aspects. The Business Intelligence solutions applied in the current analysis demonstrate the benefits brought by new tools such as SAP HANA, SAP Lumira, and SAP Predictive Analytics. The birth rate, and the contribution of different factors to it worldwide, is analyzed in detail.

Author 1: Mihaela-Laura IVAN
Author 2: Mircea Raducu TRIFU
Author 3: Manole VELICANU
Author 4: Cristian CIUREA

Keywords: Healthcare Analytics; Business Intelligence tools; SAP HANA; SAP Lumira; SAP Predictive Analytics; Birth Rate; Big Data


Paper 28: NISHA: Novel Interface for Smart Home Applications for the Arabic Region

Abstract: Researchers have developed many devices and applications for smart homes to control home appliances. The main goal of this research is to propose a touch-based interface (namely, NISHA) for smart homes that meets user needs and requirements and is able to control any appliance in the house. This study is designed for people and circumstances in Middle East countries (Jordan and the West Bank) and therefore sets out to design a user interface for smart home applications taking into consideration the economic, social, and technological differences. Owing to those differences, NISHA was designed with a classical representational design instead of a modern advanced one, based on virtual images instead of text, full control instead of automatic control, and very restrictive privacy settings, since people in these countries still look at smart homes as a technology that threatens their privacy. Moreover, NISHA was tested and evaluated using heuristic and cognitive walk-through evaluation techniques. The evaluation results showed that 80% of users and experts were satisfied with NISHA as a user-friendly interface, 90% of users were satisfied that NISHA met their expectations, and 93% of users strongly asked to have NISHA in their daily lives.

Author 1: Muneer Bani Yassein
Author 2: Yaser Khamayseh
Author 3: Maryan Yatim

Keywords: Human Computer Interaction (HCI); HCI Design and evaluation methods; User Interface Design; User Centered Design; Smart Homes


Paper 29: SIP Signaling Implementations and Performance Enhancement over MANET: A Survey

Abstract: The implementation of Session Initiation Protocol (SIP)-based Voice over Internet Protocol (VoIP) and multimedia over MANET is still a challenging issue. Many routing factors affect the performance of SIP signaling and the voice Quality of Service (QoS). Node mobility in MANET causes dynamic changes to route calculations, topology, hop counts, and the connectivity status between the correspondent nodes. SIP-based VoIP depends on the caller's registration, call initiation, and call termination processes. Therefore, SIP signaling performance plays an important role in the overall QoS of SIP-based VoIP applications for both IPv4 and IPv6 MANET. Different methods have been proposed to evaluate and benchmark the performance of SIP signaling systems. However, the efficiency of these methods varies and depends on the identified performance metrics and the implementation platforms. This survey examines the implementations of the SIP signaling system for VoIP applications over MANET and highlights the available performance enhancement methods.

Author 1: Mazin Alshamrani
Author 2: Haitham Cruickshank
Author 3: Zhili Sun
Author 4: Godwin Ansa
Author 5: Feda Alshahwan

Keywords: SIP; VoIP; MANET; Peer-to-Peer; Back-to-Back User Agent (B2BUA); IMS


Paper 30: An Efficient Audio Classification Approach Based on Support Vector Machines

Abstract: In order to achieve audio classification aimed at identifying the composer, the use of adequate and relevant features is important to improve performance, especially when the classification algorithm is based on support vector machines. As opposed to conventional approaches that often use timbral features based on a time-frequency representation of the musical signal with a constant window, this paper deals with a new audio classification method which improves feature extraction using the Constant Q Transform (CQT) approach and includes original audio features related to the musical context in which the notes appear. The enhancement provided by this work also lies in the proposal of an optimal feature selection procedure which combines filter and wrapper strategies. Experimental results show the accuracy and efficiency of the adopted approach in binary classification as well as in multi-class classification.

Author 1: Lhoucine Bahatti
Author 2: Omar Bouattane
Author 3: My Elhoussine Echhibat
Author 4: Mohamed Hicham Zaggaf

Keywords: Classification; features; selection; timbre; SVM; IRMFSP; RFE-SVM; CQT


Paper 31: Educational Data Mining & Students’ Performance Prediction

Abstract: It is important to study and analyse educational data, especially students' performance. Educational Data Mining (EDM) is the field of study concerned with mining educational data to find interesting patterns and knowledge in educational organizations. This study is equally concerned with this subject, specifically the students' performance. It explores multiple factors theoretically assumed to affect students' performance in higher education, and finds a qualitative model which best classifies and predicts students' performance based on related personal and social factors.

Author 1: Amjad Abu Saa

Keywords: Data Mining; Education; Students; Performance; Patterns


Paper 32: Performance Evaluation of 802.11p-Based Ad Hoc Vehicle-to-Vehicle Communications for Usual Applications Under Realistic Urban Mobility

Abstract: In vehicular ad hoc networks, participating vehicles organize themselves in order to support many emerging applications. While network infrastructure can be dimensioned correctly in order to provide quality-of-service support to both vehicle-to-vehicle and vehicle-to-infrastructure communications, there are still many issues in achieving the same performance using only ad hoc vehicle-to-vehicle communications. This paper investigates the performance of such communications for complete applications, including their specific packet sizes, packet acknowledgement mechanisms and quality-of-service requirements. The simulation experiments are performed using Riverbed (OPNET) Modeler on a network topology made of 50 nodes equipped with IEEE 802.11p technology and following realistic trajectories in the streets of Paris at authorized speeds. The results show that almost all application types are well supported, provided that the source and the destination have a direct link. In particular, it is pointed out that introducing supplementary hops in a communication has a greater effect on end-to-end delay and loss rate than the mobility of the nodes does. The study also shows that ad hoc reactive routing protocols degrade performance by increasing the delays, while proactive ones introduce the same counter-performance by increasing the network load with routing traffic. Whatever the routing protocol adopted, the best performance is obtained only when small groups of nodes communicate using at most two-hop routes.

Author 1: Patrick Sondi
Author 2: Martine Wahl
Author 3: Lucas Rivoirard
Author 4: Ouafae Cohin

Keywords: V2V; 802.11p; QoS; Urban mobility; Simulation


Paper 33: An Efficient and Reliable Core-Assisted Multicast Routing Protocol in Mobile Ad-Hoc Network

Abstract: A mobile ad-hoc network is a collection of mobile nodes that are connected wirelessly, forming a random topology through decentralized administration. In mobile ad-hoc networks, multicasting is one of the important mechanisms which can increase network efficiency and reliability by sending multiple copies in a single transmission without using several unicast transmissions. The receiver-initiated mesh-based multicasting approach provides reliability to a mobile ad-hoc network by reducing overhead. Receiver-initiated mesh-based multicast routing relies strongly on the proper selection of a core node. The existing schemes suffer from two main problems. First, the core selection process is not efficient: it usually selects the core in a manner that may decrease the core lifetime and deteriorate network performance in the form of frequent core failures. Second, the existing schemes cause too much delay in the core re-selection process. The performance becomes worse in situations where frequent core failures occur due to high mobility, which causes excessive flooding for the reconfiguration of another core and hence delays the on-going communication and compromises network reliability. The contributions of this paper are as follows. First, we propose an efficient method in which the core is selected within the receiver group on the basis of multiple parameters such as battery capacity and location; as a result, a more stable core is selected, with minimal core failures. Second, to increase reliability and decrease delay, we introduce the idea of a mirror core. The mirror core takes over the responsibility of the main core after the failure of the primary core, and has certain advantages such as maximum reliability, minimum delay and a minimized data collection process. We implement and evaluate the proposed solution in Network Simulator 2. The results show that this scheme performs better than the existing benchmark schemes in terms of packet delivery ratio, overhead and throughput.

Author 1: Faheem Khan
Author 2: Sohail Abbas
Author 3: Samiullah Khan

Keywords: MANET; Core; Mirror core; Multicast routing; Receiver initiated; Mesh based routing; NS2


Paper 34: Application of Fuzzy Abduction Technique in Aerospace Dynamics

Abstract: The purpose of this paper is to apply the fuzzy abduction technique to an aerospace dynamics problem. A model of an aeroplane is considered at different air density levels of the atmosphere and at different speeds of the plane. The air density of the atmosphere, the angle of the wings and the speed of the plane are selected as the parameters to be studied. A method is developed to determine the angle of the wings of the plane with respect to its axis at different air density levels of the atmosphere and at different speeds of the plane. Data are given to justify the proposed method theoretically.

Author 1: Sudipta Ghosh
Author 2: Souvik Chatterjee
Author 3: Binanda Kishore Mondal
Author 4: Debasish Kundu

Keywords: Fuzzy logic; Fuzzy abduction; Aerospace dynamics; Inverse Fuzzy relation


Paper 35: Efficient Load Balancing Routing Technique for Mobile Ad Hoc Networks

Abstract: A mobile ad hoc network (MANET) is a wireless network of mobile nodes which provides communication and mobility among wireless nodes without the need for any physical infrastructure or centralized devices such as an access point or base station. Communication in a MANET is carried out by routing protocols. Different categories of routing protocols have been introduced with different goals and objectives for MANETs, such as proactive routing protocols (e.g., DSDV), reactive routing protocols (e.g., AODV), geographic routing protocols (e.g., GRP), and hybrid routing protocols. There are two important research problems to address with such routing protocols: efficient load balancing and energy efficiency. In this paper, we focus on the evaluation and analysis of efficient load-balancing protocol design for MANETs. An inefficient load-balancing technique results in increased routing overhead, a poor packet delivery ratio, and degradation of other Quality of Service (QoS) parameters. In the literature, a number of different methods have been proposed for improving the performance of routing protocols through efficient load balancing among mobile nodes' communication; however, most of these methods suffer from various limitations. In this paper, we propose a novel technique for improving the QoS performance of the load-balancing approach as well as increasing the network lifetime. Evaluation of network lifetime is outside the scope of this paper.

Author 1: Mahdi Abdulkader Salem
Author 2: Raghav Yadav

Keywords: AODV; MANET; Load balancing; throughput; packet delivery ratio; routing overhead


Paper 36: Hybrid Deep Network and Polar Transformation Features for Static Hand Gesture Recognition in Depth Data

Abstract: Static hand gesture recognition is an interesting and challenging problem in computer vision. It is considered a significant component of Human-Computer Interaction, and it has attracted many research efforts from the computer vision community in recent decades owing to its high-potential applications, such as game interaction and sign language recognition. With the recent advent of the cost-effective Kinect, depth cameras have received a great deal of attention from researchers, and interest has grown within the vision and robotics community for their broad applications. In this paper, we propose an effective hand segmentation from the full depth image, which is an important step before extracting the features that represent a hand gesture. We also present a novel hand descriptor that explicitly encodes the shape and appearance information from depth maps, which are significant characteristics for static hand gestures. The descriptor is based on polar coordinates and is called the Histogram of Polar Transformation (HPT); it captures both shape and appearance. Besides a robust hand descriptor, a robust classification model also plays a very important role in the hand recognition model. In order to achieve a high recognition rate, we propose a hybrid classification model based on a Sparse Auto-encoder and a Deep Neural Network. We demonstrate large improvements over the state-of-the-art methods on two challenging benchmark datasets, NTU Hand Digits and ASL Finger Spelling, achieving overall accuracies of 97.7% and 84.58%, respectively. These experiments show that the proposed method significantly outperforms state-of-the-art techniques.

Author 1: Vo Hoai Viet
Author 2: Tran Thai Son
Author 3: Ly Quoc Ngoc

Keywords: Hand Gesture Recognition; Deep Network; Polar Transformation; Depth Data


Paper 37: Test Case Reduction Techniques - Survey

Abstract: Regression testing is considered to be the most expensive phase in software testing. Regression testing reduction therefore eliminates the redundant test cases in the regression testing suite and saves the cost of this phase. In order to validate the correctness of the new version of a software project resulting from the maintenance phase, regression testing reruns the regression testing suite to ensure that the new version behaves as intended. Several techniques are used to deal with the problem of regression testing reduction, and this survey classifies these techniques for the regression testing reduction problem.

Author 1: Marwah Alian
Author 2: Dima Suleiman
Author 3: Adnan Shaout

Keywords: Regression testing; Test case reduction; Test Suite


Paper 38: Energy Provisioning Technique to Balance Energy Depletion and Maximize the Lifetime of Wireless Sensor Networks

Abstract: With the promising technology of Wireless Sensor Networks (WSNs), many applications have been developed for monitoring and tracking in military, commercial, and educational environments. The characteristics of WSNs and their resource limitations impose negative impacts on the performance and effectiveness of these applications. Imbalanced energy consumption among sensor nodes can significantly reduce the performance and lifetime of the network. In a multi-hop corona WSN, the traffic imbalance among sensor nodes makes nodes located near the sink consume more energy and exhaust their energy faster than those distant from the sink. This causes what is called an "energy hole", which prevents the network from performing the intended tasks properly. The objective of the work in this paper is to balance energy consumption to help improve the lifetime of corona-based WSNs. To maximize the lifetime of the network, an innovative energy provisioning technique is proposed for harmonizing the energy consumption among coronas by computing the extra energy needed in every corona. Experimental evaluation results revealed that the proposed technique can noticeably improve the network lifetime via fair balancing of the energy consumption ratio among coronas.

Author 1: Hassan Hamid Ekal
Author 2: Jiwa Bin Abdullah

Keywords: Wireless Sensor Network (WSN); Lifetime; Node deployment; Energy provisioning


Paper 39: Learning on High Frequency Stock Market Data Using Misclassified Instances in Ensemble

Abstract: Learning on a non-stationary distribution has been shown to be a very challenging problem in machine learning and data mining, because the joint probability distribution between the data and classes changes over time. Many real-time problems suffer from concept drift as they change with time. For example, in the stock market, customer behavior may change depending on the season of the year and on inflation. Concept drift can occur in the stock market for a number of reasons: for example, traders' preferences for stocks change over time, and increases in a stock's value may be followed by decreases. The objective of this paper is to develop an ensemble-based classification algorithm for non-stationary data streams which considers misclassified instances during the learning process. In addition, we present an exhaustive comparison of the proposed algorithm with state-of-the-art classification approaches using different evaluation measures such as recall, f-measure and g-mean.

Author 1: Meenakshi A.Thalor
Author 2: S.T. Patil

Keywords: Classifiers; Concept drift; Data stream; Ensemble; Non-stationary Environment


Paper 40: Multi-Objective Task Scheduling in Cloud Computing Using an Imperialist Competitive Algorithm

Abstract: Cloud computing is being welcomed as a new basis for managing and providing services on the Internet. One of the reasons for the increased efficiency of this environment is the appropriate structure of the task scheduler. Since task scheduling in the cloud computing environment and distributed systems is an NP-hard problem, meta-heuristic methods inspired by nature are in most cases used to optimize scheduling, rather than traditional or greedy methods. One of the most powerful meta-heuristic optimization methods for complex problems is the Imperialist Competitive Algorithm (ICA). Thus, in this paper, a meta-heuristic method based on the ICA is provided to optimize scheduling in the cloud environment. Simulation results in the MATLAB environment show a 0.7 percent improvement in execution time compared with a Genetic Algorithm (GA).

Author 1: Majid Habibi
Author 2: Nima Jafari Navimipour

Keywords: Cloud Computing; Tasks scheduling; Imperialist Competitive Algorithm

PDF

Paper 41: Big Data Classification Using the SVM Classifiers with the Modified Particle Swarm Optimization and the SVM Ensembles

Abstract: The problem of developing support vector machine (SVM) classifiers using a modified particle swarm optimization (PSO) algorithm, and of their ensembles, is considered. Solving this problem allows high-precision data classification, especially Big Data classification, with acceptable time expenditures. The modified PSO algorithm conducts a simultaneous search over the type of kernel function, the parameters of the kernel function and the value of the regularization parameter for the SVM classifier. The idea of particle "regeneration" serves as the basis for the modified PSO algorithm: in its implementation, some particles change their kernel function type to the one of the particle with the best classification accuracy. The offered PSO algorithm reduces the time expenditure for developing SVM classifiers, which is very important for the Big Data classification problem. In most cases such an SVM classifier provides high-quality data classification. In exceptional cases, SVM ensembles based on the decorrelation maximization algorithm, with different decision-making strategies and the majority vote rule, can be used. Also, a two-level SVM classifier is offered, which works as a group of SVM classifiers at the first level and as an SVM classifier based on the modified PSO algorithm at the second level. The results of experimental studies confirm the efficiency of the offered approaches for Big Data classification.
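
A rough sketch of the search space such a method explores, with the "regeneration" idea reduced to particles adopting the best particle's kernel type (a simplified random-search stand-in for the full PSO update, not the authors' algorithm; all constants are assumptions):

    import random
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    KERNELS = ["rbf", "poly", "sigmoid"]

    def fitness(X, y, p):
        clf = SVC(kernel=p["kernel"], C=p["C"], gamma=p["gamma"])
        return cross_val_score(clf, X, y, cv=3).mean()

    def pso_like_search(X, y, n_particles=10, iters=20):
        swarm = [{"kernel": random.choice(KERNELS),
                  "C": 10 ** random.uniform(-2, 2),
                  "gamma": 10 ** random.uniform(-3, 1)} for _ in range(n_particles)]
        best = dict(max(swarm, key=lambda p: fitness(X, y, p)))
        for _ in range(iters):
            for p in swarm:
                if random.random() < 0.3:       # "regeneration" of the kernel type
                    p["kernel"] = best["kernel"]
                p["C"] *= 10 ** random.uniform(-0.2, 0.2)     # perturb parameters
                p["gamma"] *= 10 ** random.uniform(-0.2, 0.2)
            best = dict(max(swarm + [best], key=lambda p: fitness(X, y, p)))
        return best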

Author 1: Liliya Demidova
Author 2: Evgeny Nikulchev
Author 3: Yulia Sokolova

Keywords: Big Data; classification; ensemble; SVM classifier; kernel function type; kernel function parameters; particle swarm optimization algorithm; regularization parameter; support vectors

PDF

Paper 42: Nonlinear Condition Tolerancing Using Monte Carlo Simulation

Abstract: To ensure the accuracy and performance of products, designers tend to tighten tolerances, while manufacturers prefer to loosen them in order to reduce costs and stay competitive. The analysis and synthesis of tolerances aim at studying their influence on conformity with functional requirements. This study may be conducted for the most unfavorable configurations with the "worst case" method, or "in all cases" using the statistical approach. However, a nonlinear condition makes it difficult to analyze the influence of parameters on the functional condition. In this work, we are interested in the tolerance analysis of a mechanism presenting a nonlinear functional condition (the slider-crank mechanism). To do this, we develop a tolerance analysis approach combining the worst-case and statistical methods.
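
A worked Monte Carlo sketch for a slider-crank functional condition (the nominal dimensions, tolerances and requirement limits below are invented for illustration, not taken from the paper):

    import numpy as np

    # Slider position at crank angle theta, nonlinear in crank radius r and rod length L:
    #   x = r*cos(theta) + sqrt(L**2 - (r*sin(theta))**2)
    def slider_position(r, L, theta):
        return r * np.cos(theta) + np.sqrt(L**2 - (r * np.sin(theta))**2)

    rng = np.random.default_rng(0)
    N = 100_000
    r = rng.normal(50.0, 0.05 / 3, N)     # +/-0.05 tolerance taken as 3 sigma
    L = rng.normal(150.0, 0.10 / 3, N)    # +/-0.10 tolerance taken as 3 sigma
    x = slider_position(r, L, np.deg2rad(60.0))
    lo, hi = x.mean() - 0.12, x.mean() + 0.12    # assumed functional requirement
    print(f"mean={x.mean():.4f}, std={x.std():.4f}, "
          f"out-of-spec rate={np.mean((x < lo) | (x > hi)):.2e}")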

Author 1: JOUILEL Naima
Author 2: ELGADARI M’hammed
Author 3: RADOUANI Mohammed
Author 4: EL FAHIME Benaissa

Keywords: Worst case tolerancing; statistical tolerancing; Monte Carlo simulation; nonlinear condition; slider crank system

PDF

Paper 43: AES Inspired Hex Symbols Steganography for Anti-Forensic Artifacts on Android Devices

Abstract: Mobile phone technology has become one of the most common and important technologies, starting as a communication tool and then evolving into a key reservoir of personal information and smart applications. With this increased level of complexity, increased dangers and increased levels of countermeasures and opposing countermeasures have emerged, such as mobile forensics and anti-forensics. One of these anti-forensic tools is steganography, which introduces higher levels of complexity and security against hackers' attacks but simultaneously creates obstacles to forensic investigations. In this paper we propose a new data hiding approach, AES Inspired Steganography (AIS), which utilizes some AES data encryption concepts while hiding the data using the concept of hex symbols steganography. As the approach is based on multiple encryption steps, the resulting carrier files would be unfathomable without the cipher key agreed upon by the communicating parties. These carrier files can be exchanged among Android devices and/or computers. Assessments of the proposed approach have proven it to be advantageous over currently existing steganography approaches in terms of character frequency, security, robustness, key length, and compatibility.

Author 1: Somyia M. Abu Asbeh
Author 2: Sarah M. Hammoudeh
Author 3: Arab M. Hammoudeh

Keywords: Mobile Forensics; Anti-Forensics; Artifact Wiping; Data Hiding; Steganography; AES

PDF

Paper 44: Implementation of Novel Medical Image Compression Using Artificial Intelligence

Abstract: Medical image processing is one of the most important areas of research in medical applications of digitized medical information. Medical images are typically large. Since the advent of digital medical information, an important challenge has been handling the transmission and storage requirements of huge volumes of data, including medical images. Compression is one of the essential techniques for addressing this problem, and a large proportion of medical images must be compressed losslessly. This paper proposes a new medical image compression algorithm based on the lifting wavelet transform CDF 9/7 combined with the SPIHT coding algorithm; the algorithm applies the lifting scheme to realize the benefits of the wavelet transform. To evaluate the proposed algorithm, its results are compared with other compression algorithms such as the JPEG codec. Experimental results show that the proposed algorithm is superior to the alternatives in both lossy and lossless compression for all medical images tested; the Wavelet-SPIHT algorithm provides very high PSNR values for MRI images.
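
For readers who want to experiment, the CDF 9/7 transform is available in PyWavelets as the 'bior4.4' wavelet; the sketch below pairs it with crude coefficient thresholding as a stand-in for SPIHT (which has no standard library implementation) and reports PSNR:

    import numpy as np
    import pywt                      # PyWavelets

    img = np.random.rand(256, 256)   # stand-in for a medical image, values in [0, 1]
    coeffs = pywt.wavedec2(img, "bior4.4", level=3)      # CDF 9/7 decomposition
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 0.90)              # keep the largest 10% of
    arr[np.abs(arr) < thresh] = 0.0                      # coefficients (crude stand-in)
    rec = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"),
                        "bior4.4")[:256, :256]
    psnr = 10 * np.log10(1.0 / np.mean((img - rec) ** 2))   # peak value 1.0
    print(f"PSNR: {psnr:.2f} dB")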

Author 1: Mohammad Al-Rababah
Author 2: Abdusamad Al-Marghirani

Keywords: Medical image; lossless Compression; lifting wavelets; CDF9/7; Lifting scheme; SPIHT coding

PDF

Paper 45: EDAC: A Novel Energy-Aware Clustering Algorithm for Wireless Sensor Networks

Abstract: Clustering is a useful technique for reducing energy consumption in wireless sensor networks (WSNs). To achieve better network lifetime performance, different clustering algorithms use various parameters for cluster head (CH) selection; for example, the sensor's own residual energy as well as the network's total residual energy are used. In this paper, we propose an energy-distance aware clustering (EDAC) algorithm that incorporates both the residual energy levels of sensors within a cluster radius and the distances between them. To achieve this, we define a metric that is calculated at each sensor based on local information within its neighborhood, and incorporate this metric into the CH selection probability. Using this metric, sensors with low residual energy levels have the greatest impact on CH selection, which biases CH selection towards locations close to these sensors and thus reduces their communication energy cost to the CH. Simulation results indicate that our proposed EDAC algorithm outperforms both the LEACH and the energy-efficient DEEC protocols in terms of network lifetime.

Author 1: Ahmad A. Ababneh
Author 2: Ebtessam Al-Zboun

Keywords: Clustering algorithms; Sensor networks

PDF

Paper 46: Artificial Neural Networks and Support Vector Machine for Voice Disorders Identification

Abstract: The diagnosis of voice diseases through invasive medical techniques is effective, but it is often uncomfortable for patients. Therefore, automatic speech recognition methods have attracted more and more interest in recent years and have achieved real success in the identification of voice impairments. In this context, this paper proposes a reliable algorithm for voice disorder identification based on two classification algorithms: Artificial Neural Networks (ANN) and the Support Vector Machine (SVM). The feature extraction task is performed by the Mel Frequency Cepstral Coefficients (MFCC) and their first and second derivatives. In addition, Linear Discriminant Analysis (LDA) is proposed as a feature selection procedure in order to enhance the discriminative ability of the algorithm and minimize its complexity. The proposed voice disorder identification system is evaluated using widespread performance measures such as accuracy, sensitivity, specificity, precision and Area Under Curve (AUC).
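
A compact sketch of that feature pipeline (MFCCs with deltas, then LDA, then SVM), assuming librosa for audio features and per-recording averaging; file paths, labels and the pooling strategy are placeholders, not the authors' setup:

    import numpy as np
    import librosa
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    def voice_features(path):
        """13 MFCCs plus first and second derivatives, averaged over time."""
        y, sr = librosa.load(path, sr=None)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
        feats = np.vstack([mfcc,
                           librosa.feature.delta(mfcc),
                           librosa.feature.delta(mfcc, order=2)])
        return feats.mean(axis=1)            # one 39-dim vector per recording

    # X = np.array([voice_features(p) for p in wav_paths]); y = labels (0/1)
    # clf = make_pipeline(LinearDiscriminantAnalysis(), SVC(kernel="rbf"))
    # clf.fit(X_train, y_train); print(clf.score(X_test, y_test))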

Author 1: Nawel SOUISSI
Author 2: Adnane CHERIF

Keywords: Automatic Speech Recognition (ASR); Pathological voices; Artificial Neural Networks (ANN); Support Vector Machine (SVM); Linear Discriminant Analysis (LDA); Mel Frequency Cepstral Coefficients (MFCC)

PDF

Paper 47: Evaluating Damage Potential in Security Risk Scoring Models

Abstract: A Continuous Monitoring System (CMS) model with new, improved capabilities is presented. The system is based on the actual real-time configuration of the monitored system. Existing risk scoring models assume damage potential is estimated by the system's owner, thus ignoring the information residing in the technological configuration. The assumption underlying this research is that users are able to estimate business impacts relating to a system's external interfaces, which they use regularly in their business activities, but are unable to assess business impacts relating to internal technological components. According to the proposed model, a system's damage potential is calculated from technical information on the system's components using a directed graph. The graph is incorporated into the Common Vulnerability Scoring System (CVSS) algorithm to produce risk scoring measures. The framework presentation includes the system design, the damage potential scoring algorithm design and an illustration of scoring computations.

Author 1: Eli Weintraub

Keywords: CVSS; security; risk management; configuration; Continuous Monitoring; vulnerability; damage potential; risk scoring

PDF

Paper 48: Conservative Noise Filters

Abstract: Noisy training data have a huge negative impact on machine learning algorithms, and noise-filtering algorithms have been proposed to eliminate such noisy instances. In this work, we empirically show that the most popular noise-filtering algorithms have a large False Positive (FP) error rate: they mistakenly identify genuine instances as outliers and eliminate them. Therefore, we propose more conservative outlier identification criteria that improve the FP error rate and, thus, the performance of the noise filters. With the new filter, an instance is eliminated if and only if it is misclassified by a mutual decision of the Naïve Bayes (NB) classifier and the original filtering criteria being used. The number of genuine instances that are incorrectly eliminated is reduced as a result, thereby improving the classification accuracy.
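
A minimal sketch of that mutual-decision rule (the base filter here is a cross-validated kNN check standing in for whichever original filtering criterion is used; neighbour count and fold count are assumptions):

    import numpy as np
    from sklearn.model_selection import cross_val_predict
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier

    def conservative_filter(X, y):
        """Drop an instance only if BOTH the base filter and Naive Bayes
        misclassify it (X, y: numpy arrays)."""
        base_pred = cross_val_predict(KNeighborsClassifier(n_neighbors=3), X, y, cv=5)
        nb_pred = cross_val_predict(GaussianNB(), X, y, cv=5)
        keep = ~((base_pred != y) & (nb_pred != y))   # the mutual decision
        return X[keep], y[keep]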

Author 1: Mona M.Jamjoom
Author 2: Khalil El Hindi

Keywords: Instance Reduction Techniques; Instance-Based Learning; Class noise; Noise Filter; Naive Bayesian; Outlier; False Positive

PDF

Paper 49: Awareness Training Transfer and Information Security Content Development for Healthcare Industry

Abstract: Electronic Health Records (EHRs) are becoming increasingly pervasive, and the need to safeguard them becomes more vital for healthcare organizations. Human error is known as the biggest threat to information security in electronic health systems; it can be minimized through awareness training programs. There are various techniques available for information security awareness; however, research is scant regarding effective delivery methods for such training. It is essential that an effective awareness training delivery method is selected, designed, and executed to ensure the appropriate protection of organizational assets. This study adapts Holton's transfer of training model to develop a framework for an effective information security awareness training program. The framework provides guidelines for organizations to select an effective delivery method based on the organization's needs and success factors, and to create information security content from a selected healthcare organization's internal information security policy and related international standards. Organizations should make continual efforts to ensure that the content of the policy is effectively communicated to employees.

Author 1: Arash Ghazvini
Author 2: Zarina Shukur

Keywords: information security; human error; awareness training program; training content; security policy; electronic health record

PDF

Paper 50: Face Detection and Recognition Using Viola-Jones with PCA-LDA and Square Euclidean Distance

Abstract: In this paper, an automatic face recognition system is proposed based on appearance-based features that focus on the entire face image rather than local facial features. The first step in a face recognition system is face detection. The Viola-Jones face detection method, which is capable of processing images extremely rapidly while achieving high detection rates, is used. This method was among the most influential of the 2000s and is known as the first object detection framework to provide relevant object detection in real time. Feature extraction and dimension reduction are applied after face detection. Principal Component Analysis (PCA) is widely used in pattern recognition. Linear Discriminant Analysis (LDA), used to overcome drawbacks of PCA, has been successfully applied to face recognition; it is achieved by projecting the image onto the Eigenface space by PCA and then implementing pure LDA over it. Square Euclidean Distance (SED) is used for matching, since distance is a major concern in pattern recognition: the distance between the vectors of two images indicates image similarity. The proposed method is tested on three databases (MUCT, Face94, and Grimace). Different numbers of training and testing images are used to evaluate the system performance, and the results show that increasing the number of training images increases the recognition rate.
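
An outline of this detect-project-match chain in OpenCV and scikit-learn (a sketch, not the authors' code; crop size, component count and the gallery handling are assumptions):

    import cv2
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Viola-Jones detector shipped with OpenCV.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def face_vector(gray):
        """Detect, crop and flatten the first face found (assumes one is found)."""
        x, y, w, h = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)[0]
        return cv2.resize(gray[y:y+h, x:x+w], (64, 64)).ravel().astype(np.float64)

    # Eigenface projection via PCA, then LDA over it (training data assumed):
    # pca = PCA(n_components=50).fit(X_train)
    # lda = LinearDiscriminantAnalysis().fit(pca.transform(X_train), identities)
    # gallery = lda.transform(pca.transform(X_train))
    # q = lda.transform(pca.transform(face_vector(probe)[None, :]))
    # match = ((gallery - q) ** 2).sum(axis=1).argmin()   # squared Euclidean distance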

Author 1: Nawaf Hazim Barnouti
Author 2: Sinan Sameer Mahmood Al-Dabbagh
Author 3: Wael Esam Matti
Author 4: Mustafa Abdul Sahib Naser

Keywords: Face Detection; Face Recognition; PCA; LDA; Viola-Jones; Feature Extraction; Distance Measurement; MATLAB; MUCT; Face94; Grimace

PDF

Paper 51: Data Mining Framework for Generating Sales Decision Making Information Using Association Rules

Abstract: The rapid technological development in the field of information and communication technology (ICT) has enabled the databases of super shops to be organized under a countrywide sales decision making network to develop intelligent business systems by generating enriched business policies. This paper presents a data mining framework for generating sales decision making information from sales data, using association rules generated from a valid user input item set with respect to the sales data under analysis. The proposed framework includes the super shop's raw database storing sales data collected through sales application systems at different Point of Sale (POS) terminals. The Apriori algorithm is well known for association rule discovery from transactional databases. The proposed technique, using customized association rule generation and analysis, checks the input items against the sales data for validation, and the support and confidence of each rule are computed. Sales decision making information about the input items is generated by analyzing each of the generated association rules, which can be used to improve sales decision making policy to attract customers and increase sales. This approach is more decision- and application-oriented, as business decision makers are not usually interested in all of the items in the sales database when making a specific sales decision.
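
A toy illustration of the support/confidence computation restricted to a user-supplied item set (the data and thresholds are invented; this shows the generic Apriori-style rule check, not the paper's full framework):

    from itertools import combinations

    transactions = [{"bread", "milk"}, {"bread", "butter", "milk"},
                    {"butter", "milk"}, {"bread", "butter"},
                    {"bread", "butter", "milk"}]

    def support(itemset):
        return sum(itemset <= t for t in transactions) / len(transactions)

    def rules_for(input_items, min_sup=0.4, min_conf=0.6):
        """Rules over only the items the decision maker cares about."""
        for r in range(1, len(input_items)):
            for lhs in map(set, combinations(sorted(input_items), r)):
                rhs = input_items - lhs
                sup = support(lhs | rhs)
                if sup >= min_sup and support(lhs) and sup / support(lhs) >= min_conf:
                    print(f"{lhs} -> {rhs}  sup={sup:.2f} conf={sup / support(lhs):.2f}")

    rules_for({"bread", "butter", "milk"})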

Author 1: Md. Humayun Kabir

Keywords: databases; data mining framework; Apriori algorithm; association rule; sales decision making information

PDF

Paper 52: A Reversible Data Hiding Scheme for BTC-Compressed Images

Abstract: This paper proposes a reversible data hiding scheme for BTC-compressed images. A block in a BTC-compressed image consists of a larger block-mean pixel and a smaller block-mean pixel. Two message bits are embedded into a pair of neighboring blocks: one is embedded by expanding the difference between the two larger block-mean pixels, and the other by expanding the difference between the two smaller block-mean pixels. Experimental results show that this embedding strategy can decrease the modification of images. The proposed scheme obtains a stego-image with high visual quality and a payload capacity of approximately one bit per block.
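
The underlying difference-expansion step can be shown in a few lines (a generic Tian-style embed/extract on one pair of values, assuming a >= b and no overflow handling; the paper applies this idea to pairs of neighboring blocks' mean pixels):

    def embed(a, b, bit):
        """Expand the difference of a pair (a >= b) to carry one bit."""
        avg, d = (a + b) // 2, a - b
        d2 = 2 * d + bit
        return avg + (d2 + 1) // 2, avg - d2 // 2

    def extract(a2, b2):
        """Recover the bit and the original pair (exact inverse of embed)."""
        avg, d2 = (a2 + b2) // 2, a2 - b2
        bit, d = d2 % 2, d2 // 2
        return bit, avg + (d + 1) // 2, avg - d // 2

    a2, b2 = embed(120, 100, 1)
    assert extract(a2, b2) == (1, 120, 100)   # reversibility check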

Author 1: Ching-Chiuan Lin
Author 2: Shih-Chieh Chen
Author 3: Kuo Feng Hwang
Author 4: Chi-Ming Yao

Keywords: Block Truncation Coding; Reversible Data Hiding; Difference Expansion

PDF

Paper 53: Delay-Decomposition Stability Approach of Nonlinear Neutral Systems with Mixed Time-Varying Delays

Abstract: This paper deals with the asymptotic stability of neutral systems with mixed time-varying delays and nonlinear perturbations. Based on a Lyapunov–Krasovskii functional including triple integral terms and the free weighting matrices approach, a novel delay-decomposition stability criterion is obtained. The main idea of the proposed method is to divide each delay interval into two equal segments; the Lyapunov–Krasovskii functional is then used to split the bounds of the integral terms over each subinterval. In order to reduce the conservatism of the stability criterion, delay-dependent sufficient conditions are derived in terms of Linear Matrix Inequalities (LMIs). Finally, numerical simulations are given to show the effectiveness of the proposed stability approach.

Author 1: Ilyes MAZHOUD
Author 2: Issam AMRI
Author 3: Dhaou SOUDANI

Keywords: Neutral systems; Lyapunov–Krasovskii approach; asymptotic stability; mixed time-varying delays; nonlinear perturbations; Linear Matrix Inequalities (LMIs)

PDF

Paper 54: Analyzing Virtual Machine Live Migration in Application Data Context

Abstract: Virtualization plays a vital role in the big cloud federation. Live and real-time virtual machine migration is always a challenging task in virtualized environments; different approaches, techniques and models have already been presented and implemented by many researchers. The aim of this work is to investigate various parameters of real-time and live migration of virtual machines in a stateful data context at the application level. Migrating a virtual machine takes time that depends on the network bandwidth, guest availability, hardware limitations, resource allocation, server reallocation, hypervisor compatibility and more. To assess performance and optimize migration time, this work presents an analysis in the form of time stacks over multiple pieces of data stored in the virtual machines. To optimize the migration time, virtual machine checkpoints are used together with the Xen hypervisor memory technique, which dynamically allows migration of the configured memory while merely allocated memory can be temporarily discarded. In this way, the unused memory remains un-migrated and only the memory containing used data is migrated in real time.

Author 1: Mutiullah Shaikh
Author 2: Asadullah Shaikh
Author 3: Muhammad Ali Memon
Author 4: Farah Deeba

Keywords: Cloud Computing; Virtualization; Virtual Machine Monitor (VMM); Xen; VMResume; Xen Save and Restore; Data Centers (DC); Copy on Write (CoW)

PDF

Paper 55: Survey of Techniques for Deep Web Source Selection and Surfacing the Hidden Web Content

Abstract: Large and continuously growing dynamic web content has created new opportunities for large-scale data analysis in recent years. There is a huge amount of information that traditional web crawlers cannot access, since they use a link analysis technique by which only the surface web can be reached. Traditional search engine crawlers require web pages to be linked to other pages via hyperlinks, causing a large amount of web data to be hidden from the crawlers. Enormous amounts of data are available in the deep web that could yield new insights for various domains, creating the need for efficient techniques to access that information. As the amount of web content grows rapidly, the types of data sources are proliferating and often provide heterogeneous data, so we need to select the deep web data sources to be used by integration systems. This paper discusses various techniques that can be used to surface deep web information, as well as techniques for deep web source selection.

Author 1: Khushboo Khurana
Author 2: M.B. Chandak

Keywords: Deep Web; Surfacing Deep Web; Source Selection; Deep Web Crawler; Schema Matching

PDF

Paper 56: Optimization of Channel Coding for Transmitted Image Using Quincunx Wavelets Transforms Compression

Abstract: Many images you see on the Internet today have undergone compression for various reasons. Image compression can benefit users by having pictures load faster and webpages use less space on a web host. Image compression does not reduce the physical size of an image but instead compresses the data that makes up the image into a smaller size. In the case of image transmission, noise decreases the quality of the received image, which obliges us to use channel coding techniques to protect the data against channel noise. The Reed-Solomon code is one of the most popular channel coding techniques used to correct errors in many systems (wireless or mobile communications, satellite communications, digital television / DVB, high-speed modems such as ADSL, xDSL, etc.). Since there are many possibilities for selecting the input parameters of the RS code, we are concerned with finding the optimal input that can protect the data with the minimum number of redundant bits. In this paper we use a genetic algorithm to optimize the selection of the input parameters of the RS code according to the channel conditions, which reduces the number of bits needed to protect the data while maintaining high quality of the received image.
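
One way to see the trade-off such a GA would search over: the residual error of an RS(n, k) code on a binary symmetric channel versus its redundancy (a standard textbook failure-probability computation, not the paper's fitness function; the bit-error rate below is an assumption):

    from math import comb

    def rs_block_failure(n, k, p_bit, m=8):
        """P(uncorrectable RS(n,k) codeword over GF(2^m)) on a BSC.
        Up to t = (n-k)//2 symbol errors can be corrected."""
        p_sym = 1 - (1 - p_bit) ** m        # a symbol fails if any of its bits flip
        t = (n - k) // 2
        ok = sum(comb(n, i) * p_sym**i * (1 - p_sym)**(n - i) for i in range(t + 1))
        return 1 - ok

    # More redundancy (smaller k) lowers the failure rate but costs extra bits;
    # a GA fitness could penalize both, e.g. failure + lam * (n - k) / n.
    for n, k in [(255, 239), (255, 223), (255, 191)]:
        print(f"RS({n},{k}): {rs_block_failure(n, k, p_bit=1e-3):.3e}")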

Author 1: Mustapha Khelifi
Author 2: Abdelmounaim Moulay Lakhdar
Author 3: Iman Elawady

Keywords: Code Rate; Optimization; Quincunx Wavelets Transforms compression; Genetic Algorithm; BSC channel; Reed-Solomon codes

PDF

Paper 57: Fault-Tolerant Fusion Algorithm of Trajectory and Attitude of Spacecraft Based on Multi-Station Measurement Data

Abstract: Addressing the practical situation in which the navigation process of a spacecraft relies on several different kinds of tracking equipment that track the spacecraft in turns, a series of new outlier-tolerant fusion algorithms is built to determine the whole flight path as well as the attitude parameters. In these new algorithms, the well-known gradient descent methods are used to find outlier-tolerant flight paths from a carefully designed integrated data-fusion function. In this paper, these algorithms are used to reliably determine the flight paths and attitude parameters when a spacecraft is tracked by a series of instruments working in turns and there are outliers arising in the data series. The advantages of these new algorithms are not only full fusion of all the data series from the different kinds of equipment but also discriminatory usage: on the one hand, if the data are dependable, the usable information contained in them is fully exploited; on the other hand, if the data are outliers, the bad information they carry is efficiently eliminated by the algorithms. In this way, all of the computed flight paths and attitude parameters are ensured to be consistent and reliable.

Author 1: YANG Xiaoyan
Author 2: HU Shaolin
Author 3: YU Hui
Author 4: LI Shaomini Xi’an

Keywords: trajectory; fault-tolerance; data fusion

PDF

Paper 58: Discrete-Time Approximation for Nonlinear Continuous Systems with Time Delays

Abstract: This paper is concerned with the discretization of nonlinear continuous-time delay systems. Our approach is based on Taylor-Lie series. The main idea is to minimize the effect of the delay and to reduce the significance of the nonlinear terms by linearizing the studied system, in an attempt to make its handling and programming as easy as possible. We investigate a new method based on the development of new theoretical techniques for the time discretization of nonlinear systems with time delay. The performance of the proposed discretization methods was validated by numerical simulation using a nonlinear system with state delay. Some illustrative examples are given to show the effectiveness of the obtained results.

Author 1: Bemri H’mida
Author 2: Soudani Dhaou

Keywords: Discrete-time systems; Time-delay systems; Taylor-Lie series; non-linear systems; Simulation

PDF

Paper 59: New Data Clustering Algorithm (NDCA)

Abstract: Wireless sensor networks (WSNs) have sensing, data processing and communication capabilities. The major task of a sensor node is to gather data from the sensed field and send it to the end user via the base station (BS). To achieve scalability and prolong the network lifetime, the sensor nodes are grouped into clusters. This paper proposes a new clustering algorithm named the New Data Clustering Algorithm (NDCA), which takes the optimal number of clusters and the number of data packets received from the surrounding environment as the cluster head (CH) selection criteria.

Author 1: Abdullah Abdulkarem
Author 2: Imane Aly Saroit Ismail
Author 3: Amira Mohammed Kotb

Keywords: Wireless sensor network; clustering; energy efficiency; cluster head selection

PDF

Paper 60: Load Balancing in Partner-Based Scheduling Algorithm for Grid Workflow

Abstract: Automated advance reservation has the potential to ensure a good scheduling solution in computational Grids. To improve the global throughput of a Grid system and enhance resource utilization, the workload has to be distributed evenly among the resources of the Grid. This paper discusses the problem of load distribution and resource utilization in heterogeneous Grids in an advance reservation environment. We propose an extension of the Partner Based Dynamic Critical Path for Grids algorithm, named Balanced Partner Based Dynamic Critical Path for Grids (B-PDCPG), which incorporates a hybrid, threshold-based mechanism to keep the variation in workload among resources within an allowed value. The proposed load balancing technique uses Utilization Profiles to store the reservation details and checks the loads on each of the resources and links from these profiles. The load is distributed among resources based on the processing element capacity and the number of processing units on the resources. The simulation results, using the GridSim simulation engine, show that the proposed technique balances the workload very effectively and provides better utilization of resources while decreasing the workflow makespan.

Author 1: Muhammad Roman
Author 2: Jawad Ashraf
Author 3: Asad Habib
Author 4: Gohar Ali

Keywords: Load Balancing; Advance Reservation; Resource Utilization; Workflow Scheduling; Job Distribution

PDF

Paper 61: TMCC: An Optimal Mechanism for Congestion Control in Wireless Sensor Networks

Abstract: Most proposed methods for congestion control in Wireless Sensor Networks (WSNs) have disadvantages such as a central congestion control mechanism at the sink node, the use of only one traffic control or resource control mechanism, and identical throughput at all nodes. To address these problems, this paper presents a new congestion control protocol intended to increase the network lifetime and reliability of WSNs. Since the priority of generated traffic at the network level is not uniform in WSNs, an architectural framework based on the priority of generated traffic is proposed for service identification at the network level, in order to achieve better service quality and efficiency. The proposed method, called TMCC, is compared with the Traffic-Aware Dynamic Routing (TADR) method to show its effectiveness in terms of end-to-end delay, throughput, power consumption and network lifetime.

Author 1: Razieh Golgiri
Author 2: Reza Javidan

Keywords: Wireless Sensor Network (WSN); traffic management; resource control; alternative path; QoS; TMCC

PDF

Paper 62: Static Filtered Sky Color Constancy

Abstract: In computer vision, sky color is used for lighting correction, image color enhancement, horizon alignment, image indexing, and outdoor image classification, among many other applications. In this article, the use of lighting correction for robust color-based sky segmentation and detection is investigated; specifically, the impact of color constancy on sky color detection algorithms is evaluated. The color correction (constancy) algorithms used include Gray-Edge (GE), Gray-World (GW), Max-RGB (MRGB) and Shades-of-Gray (SG). The algorithms GE, GW, MRGB, and SG are tested on static filtered sky modeling, where the static filter is developed in the LAB color space. This evaluation and analysis is essential for detection scenarios, especially color-based object detection in outdoor scenes. From the results, it is concluded that applying color constancy before sky color detection with LAB static filters has the potential to improve detection performance; however, color constancy can also impart adverse effects on the detection results. For some images, the color constancy algorithms produce a compact and stable representation of the sky chroma loci, yet the sky color locus may shift and deviate in a particular color representation. Since the sky static filters use static chromatic values, different results can be obtained by applying color constancy algorithms to various datasets.
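
Two of the tested color constancy algorithms are simple enough to state directly (standard Gray-World and Max-RGB corrections on a float RGB image; the LAB static filter itself is not reproduced here):

    import numpy as np

    def gray_world(img):
        """Scale each channel so the image mean becomes gray; img in [0, 1], (H, W, 3)."""
        means = img.reshape(-1, 3).mean(axis=0)
        return np.clip(img * (means.mean() / means), 0.0, 1.0)

    def max_rgb(img):
        """Assume the brightest response per channel reflects the illuminant."""
        return np.clip(img / img.reshape(-1, 3).max(axis=0), 0.0, 1.0)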

Author 1: Ali Alkhalifah

Keywords: Static Filter; Color Constancy; LAB color space; Sky Color Detection; Horizon detection

PDF

Paper 63: An Approach to Finding Similarity Between Two Community Graphs Using Graph Mining Techniques

Abstract: Graph similarity has been studied in the fields of shape retrieval, object recognition, face recognition and many other areas. It is sometimes important to compare two community graphs for similarity, which makes mining reliable knowledge from a large community graph easier: once similarity is established, the necessary knowledge can be mined from only one community graph rather than both, which saves time. This paper proposes an algorithm for checking the similarity of two community graphs using graph mining techniques. Since a large community graph is difficult to visualize, compression is essential. The proposed method is easier and faster when checking for similarity, since the comparison is between the two compressed community graphs rather than the original large community graphs.

Author 1: Bapuji Rao
Author 2: Saroja Nanda Mishra

Keywords: community graph; compressed community graph; dissimilar edges; self-loop; similar edges; weighted adjacency matrix

PDF

Paper 64: A New Approach for Enhancing the Quality of Medical Computerized Tomography Images

Abstract: Computerized tomography (CT) images contribute immensely to medical research and diagnosis. However, due to degradative factors such as noise, low contrast, and blurring, CT images tend to be a degraded representation of the actual body or part under investigation. To reduce the risk of imprecise diagnosis associated with poor-quality CT images, this paper presents a new technique designed to enhance the quality of medical CT images. The main objective is to improve the appearance of CT images in order to obtain better visual interpretation and analysis, which is expected to ease the diagnosis process. The proposed technique involves applying a median filter to remove noise from the CT images and then using a Laplacian filter to enhance the edges and the contrast in the images. As CT images suffer from low contrast, a Contrast Limited Adaptive Histogram Equalization transform is also applied; the main strengths of this transform are its modest computational requirements, ease of application, and excellent results for most images. According to a subjective assessment by a group of radiologists, the proposed technique resulted in excellent enhancement of the contrast and the edges of medical CT images. From a medical perspective, the proposed technique was able to clarify the arteries, tissues, and lung nodules in the CT images; in addition, blurred nodules in chest CT images were enhanced effectively. The proposed technique can therefore help radiologists to better detect lung nodules, and can also assist in diagnosing the presence of tumours and in detecting abnormal growths.
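
A sketch of that three-stage pipeline in OpenCV (the kernel sizes and CLAHE settings are assumptions; the paper's exact parameters are not given in the abstract):

    import cv2
    import numpy as np

    def enhance_ct(gray):
        """Median filter -> Laplacian sharpening -> CLAHE, for an 8-bit CT slice."""
        den = cv2.medianBlur(gray, 3)                     # suppress noise
        lap = cv2.Laplacian(den, cv2.CV_16S, ksize=3)     # edge response
        sharp = cv2.convertScaleAbs(den.astype(np.int16) - lap)  # boost edges
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        return clahe.apply(sharp)                         # fix low contrast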

Author 1: Mutaz Al-Frejat
Author 2: Mohammad HjoujBtoush

Keywords: Spatial domain; CT image; Laplacian filter; MedPix database; Lung nodules

PDF

Paper 65: An Enhanced Medical Image Compression Algorithm Based on Neural Network

Abstract: The main objective of medical image compression is to attain the best possible fidelity for the available communication and storage [6], in order to preserve the information contained in the image and avoid errors when processing it. In this work, we propose a medical image compression algorithm based on an Artificial Neural Network (ANN). It is a simple algorithm which preserves all the image data. Experimental results on 8 bits/pixel and 12 bits/pixel medical images show the performance and efficiency of the proposed method. To determine the 'acceptability' of image compression we used different criteria such as maximum absolute error (MAE), universal image quality (UIQ), correlation and peak signal to noise ratio (PSNR).
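
Two of the cited quality criteria are easy to restate exactly (MAE here is the maximum absolute error, as the abstract defines it; peak is 255 for 8-bit and 4095 for 12-bit images):

    import numpy as np

    def mae(orig, rec):
        """Maximum absolute error: worst-case pixel deviation (0 means lossless)."""
        return int(np.max(np.abs(orig.astype(np.int64) - rec.astype(np.int64))))

    def psnr(orig, rec, peak=255):
        mse = np.mean((orig.astype(np.float64) - rec.astype(np.float64)) ** 2)
        return float("inf") if mse == 0 else 10 * np.log10(peak**2 / mse)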

Author 1: Manel Dridi
Author 2: Mohamed Ali Hajjaji
Author 3: Belgacem Bouallegue
Author 4: Abdellatif Mtibaa

Keywords: Artificial Neural Network; medical image; compression; DICOM; PSNR; CR

PDF

Paper 66: A QoS Solution for NDN in the Presence of Congestion Control Mechanism

Abstract: Both congestion control and Quality of Service (QoS) are important quality attributes in computer networks. Specifically, for the future Internet architecture known as Named Data Networking (NDN), solutions using hop-by-hop interest shaping have been shown to cope with the traffic congestion issue, and ad-hoc techniques for implementing QoS in NDN have been proposed. In this paper, we propose a new QoS mechanism that can work on top of an existing interest-shaping-based congestion control. Our solution provides four priority levels, which are assigned to packets and lead to different QoS. Simulations show that high-priority applications are consistently served first, while at the same time low-priority applications never starve. Results in the ndnSIM simulator also demonstrate that we avoid congestion while operating at optimal throughputs.

Author 1: Abdullah Alshahrani
Author 2: Izzat Alsmadi

Keywords: Named Data Networking; Quality of Service; Congestion Control

PDF

Paper 67: An Algorithmic approach for abstracting transient states in timed systems

Abstract: In previous works, the timed logic TCTL was extended with important modalities in order to abstract transient states that last for less than k time units. For all modalities of this extension, called TCTL?, the decidability of the model-checking problem has been proved with an appropriate extension of Alur and Dill's region graph. But this theoretical result does not lend itself to a natural implementation due to its state-space explosion problem. This is not surprising since, even for the TCTL timed logics, the model-checking algorithm implemented in tools like UPPAAL or KRONOS is based on a so-called zone algorithm and data structures like DBMs, rather than on explicit sets of regions. In this paper, we propose a symbolic model-checking algorithm which computes the characteristic sets of some TCTL? formulae and checks their truth values. This algorithm generalizes the zone algorithm for the TCTL timed logics. We also present a complete correctness proof of this algorithm, and we describe its implementation using the DBM data structure.

Author 1: Mohammed Achkari Begdouri
Author 2: Houda Bel Mokadem
Author 3: Mohamed El Haddad

Keywords: Timed automata, symbolic model checking, backward analysis algorithm, correctness, data structures

PDF

Paper 68: Automatic Diagnosing of Suspicious Lesions in Digital Mammograms

Abstract: Breast cancer is the most common cancer and the leading cause of morbidity and mortality among women aged between 50 and 74 years worldwide. In this paper we propose a method to detect suspicious lesions in mammograms, extract their features and classify them as normal or abnormal and benign or malignant for the diagnosis of breast cancer. This method consists of two major parts: the first is the detection of regions of interest (ROIs); the second is the diagnosis of the detected ROIs. The method was tested on the Mini Mammography Image Analysis Society (Mini-MIAS) database. To check the method's performance, we used the FROC (Free-response Receiver Operating Characteristic) curve for the detection part and the ROC (Receiver Operating Characteristic) curve for the diagnosis part. The results show that the detection part has a sensitivity of 94.27% at 0.67 false positives per image. The diagnosis part achieves 94.29% accuracy, with 94.11% sensitivity and 94.44% specificity, in the classification as normal or abnormal, and 94.4% accuracy, with 96.15% sensitivity and 94.54% specificity, in the classification as benign or malignant.

Author 1: Abdelali ELMOUFIDI
Author 2: Khalid El Fahssi
Author 3: Said Jai-andaloussi
Author 4: Abderrahim Sekkaki
Author 5: Gwenole Quellec
Author 6: Mathieu Lamard
Author 7: Guy Cazuguel

Keywords: Breast cancer, Mammogram, Computer-aided diagnosis, Segmentation, Regions of interest, Support Vector Machine, FROC analysis, ROC analysis

PDF

Paper 69: Detection and Counting of On-Tree Citrus Fruit for Crop Yield Estimation

Abstract: In this paper, we present a technique to estimate citrus fruit yield from tree images. Manually counting fruit for yield estimation for marketing and other managerial tasks is time consuming and requires human resources, which do not always come cheap. Different approaches have been used for this purpose, yet separating fruit from its background poses challenges and renders the exercise inaccurate. In this paper, we use k-means segmentation for recognition of fruit, which segments the image accurately, thus enabling more accurate yield estimation. We created a dataset containing 83 tree images with 4001 citrus fruits from three different fields. We are able to detect the on-tree fruits with an accuracy of 91.3%. In addition, we find a strong correlation between the manual and the automated fruit counts, with coefficients of determination R2 up to 0.99.
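
A plausible reading of the color-clustering step (a sketch only: the reference fruit color, blob-size threshold and cluster count are invented, and the paper's occlusion handling is omitted). The reported R2 is then just the squared Pearson correlation between manual and automated counts, e.g. np.corrcoef(manual, auto)[0, 1] ** 2:

    import numpy as np
    from scipy import ndimage
    from sklearn.cluster import KMeans

    def count_fruit(img_rgb, k=3):
        h, w, _ = img_rgb.shape
        pixels = img_rgb.reshape(-1, 3).astype(np.float32)
        km = KMeans(n_clusters=k, n_init=10).fit(pixels)
        orange = np.array([230.0, 140.0, 30.0])          # reference fruit color
        fruit_cluster = np.linalg.norm(km.cluster_centers_ - orange, axis=1).argmin()
        mask = (km.labels_ == fruit_cluster).reshape(h, w)
        blobs, n = ndimage.label(mask)                   # connected components
        sizes = ndimage.sum(mask, blobs, range(1, n + 1))
        return int(np.sum(sizes > 30))                   # ignore tiny speckles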

Author 1: Zeeshan Malik
Author 2: Sheikh Ziauddin
Author 3: Ahmad R. Shahid
Author 4: Asad Safi

Keywords: Precision agriculture; yield estimation; k-means segmentation; leaf occlusion; illumination; morphology

PDF

Paper 70: Diversity-Based Boosting Algorithm

Abstract: Boosting is a well-known and efficient technique for constructing a classifier ensemble. An ensemble is built incrementally by altering the distribution of the training data set and forcing learners to focus on misclassification errors. In this paper, an improvement to the Boosting algorithm called the DivBoosting algorithm is proposed and studied. Experiments on several data sets are conducted with both Boosting and DivBoosting. The experimental results show that DivBoosting is a promising method for ensemble pruning. We believe it has many advantages over the traditional boosting method because its mechanism is based not solely on selecting the most accurate base classifiers but also on selecting the most diverse set of classifiers.
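
A sketch of how diversity can enter member selection (pairwise disagreement plus a greedy accuracy/diversity trade-off; the weighting and the diversity measure are assumptions, not necessarily DivBoosting's):

    import numpy as np

    def select_diverse(members, X_val, y_val, size, alpha=0.5):
        """Greedily pick 'size' members balancing accuracy against disagreement
        with the already-chosen set (alpha weighs the two criteria)."""
        preds = [m.predict(X_val) for m in members]
        accs = [np.mean(p == y_val) for p in preds]
        chosen = [int(np.argmax(accs))]                 # start from the most accurate
        while len(chosen) < size:
            def score(i):
                div = np.mean([np.mean(preds[i] != preds[j]) for j in chosen])
                return alpha * accs[i] + (1 - alpha) * div
            rest = [i for i in range(len(members)) if i not in chosen]
            chosen.append(max(rest, key=score))
        return [members[i] for i in chosen]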

Author 1: Jafar A. Alzubi

Keywords: Artificial Intelligence; Classification; Boosting; Diversity; Game Theory

PDF

Paper 71: Efficient Verification-Driven Slicing of UML/OCL Class Diagrams

Abstract: Model defects are a significant concern in the Model-Driven Development (MDD) paradigm, as model transformations and code generation may propagate errors present in the model to other notations, where they are harder to detect and trace. Formal verification techniques can check the correctness of a model, but their high computational complexity can limit their scalability. Current approaches to this problem have an exponential worst-case run time. In this paper, we propose a slicing technique which breaks a model into several independent submodels from which irrelevant information can be abstracted to improve the scalability of the verification process. We consider a specific static model (UML class diagrams annotated with unrestricted OCL constraints) and a specific property to verify (satisfiability, i.e., whether it is possible to create objects without violating any constraints). The definition of the slicing procedure ensures that the property under verification is preserved after partitioning. Furthermore, the paper provides an evaluation of experimental results from a real-world case study.

Author 1: Asadullah Shaikh
Author 2: Uffe Kock Wiil

Keywords: MDD; UML; OCL; Model Slicing; Efficient Verification

PDF

Paper 72: NEB in Analysis of Natural Image 8 × 8 and 9 × 9 High-contrast Patches

Abstract: In this paper we use the nudged elastic band technique from computational chemistry to investigate sampled high-dimensional data from a natural image database. We randomly sample 8 × 8 and 9 × 9 high-contrast patches of natural images and create a density estimator regarded as a Morse function. From the Morse function we build one-dimensional cell complexes over the sampled data. Using these one-dimensional cell complexes, we identify topological properties of 8 × 8 and 9 × 9 high-contrast natural image patches; we show that there exist two kinds of subsets of high-contrast 8 × 8 and 9 × 9 patches modeled as a circle, and with the new method we confirm some results previously obtained through the methods of computational topology.

Author 1: Shengxiang Xia
Author 2: Wen Wang

Keywords: nudged elastic band; natural image high-contrast patch; cell complex; density function

PDF

Paper 73: Object Conveyance Algorithm for Multiple Mobile Robots based on Object Shape and Size

Abstract: This paper describes a method for determining the number of team members for multiple mobile robot object conveyance. The number of robots in a multiple mobile robot system is a key factor in the complexity of robot formation and motion control. In our previous research, we verified the use of the complex-valued neural network for controlling multiple mobile robots in the object conveyance problem. Although determining an effective team size for multiple mobile robot object conveyance is a significant issue, few studies have addressed it. Therefore, we propose an algorithm for determining the number of team members for multiple mobile robot object conveyance with grasping push, where the team size is determined based on the object's weight to obtain an appropriate formation. First, object shape and size measurement is carried out by a surveyor robot that approaches and surrounds the object. While surrounding the object, the surveyor robot measures its distance to the object and records it for estimating the object's shape and size. Once the object's shape and size are estimated, the surveyor robot takes an initial push position at the estimated push point and calls additional robots for cooperative pushing. The algorithm is validated in several computer simulations with varying object shapes and sizes. The results show that the proposed algorithm is promising for minimizing the number of robots in multiple mobile robot object conveyance.

Author 1: Purnomo Sejati
Author 2: Hiroshi Suzuki
Author 3: Takahiro Kitajima
Author 4: Akinobu Kuwahara
Author 5: Takashi Yasuno

Keywords: multiple mobile robots, object conveyance, team member determination

PDF

Paper 74: On the Use of Arabic Tweets to Predict Stock Market Changes in the Arab World

Abstract: Social media users nowadays express their opinions and feelings about the many events occurring in their lives. For certain users, some of the most important events are those related to the financial markets. An interesting research field has emerged over the past decade to study the possible relationship between fluctuations in the financial markets and online social media. In this research we present a comprehensive study to identify the relation between Arabic financial-related tweets and changes in stock markets, using a set of the most active Arab stock indices. The results show that there is a Granger causality relation between the volume and sentiment of Arabic tweets and the changes in some of the stock markets.
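
The Granger test itself is a one-liner with statsmodels; the synthetic series below merely shows the mechanics (real tweet volume/sentiment and index series would replace them):

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(1)
    tweets = rng.normal(size=200)                         # daily tweet sentiment
    index_change = 0.4 * np.roll(tweets, 2) + rng.normal(scale=0.8, size=200)
    df = pd.DataFrame({"index_change": index_change, "tweet_sentiment": tweets})
    # Tests whether the SECOND column Granger-causes the FIRST, for lags 1..5.
    grangercausalitytests(df[["index_change", "tweet_sentiment"]], maxlag=5)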

Author 1: Khalid AlKhatib
Author 2: Abdullateef Rabab’ah
Author 3: Mahmoud Al-Ayyoub
Author 4: Yaser Jararweh

Keywords: Twitter; Sentiment Analysis; Granger Causality; Pearson Correlation; Arab Stock Market

PDF

Paper 75: Optical Character Recognition System for Urdu Words in Nastaliq Font

Abstract: Optical Character Recognition (OCR) has been an attractive research area for the last three decades, and mature OCR systems reporting recognition rates of nearly 100% are available for many scripts and languages today. Despite these developments, research on text recognition in many languages is still in its early days, Urdu being one of them. The limited existing literature on Urdu OCR is either restricted to isolated characters or considers limited vocabularies in fixed font sizes. This research presents a segmentation-free and size-invariant technique for recognition of Urdu words in the Nastaliq font using ligatures as units of recognition. Ligatures, separated into primary ligatures and diacritics, are recognized using right-to-left HMMs. Diacritics are then associated with the main body using position information, and the resulting ligatures are validated using a dictionary. The system, evaluated on Urdu words, realized promising recognition rates at the ligature and word levels.

Author 1: Safia Shabbir
Author 2: Imran Siddiqi

Keywords: Optical Character Recognition; Urdu Text; Ligatures; Hidden Markov Models; Clustering

PDF

Paper 76: Semantic Feature Based Arabic Opinion Mining Using Ontology

Abstract: With the increase of opinionated reviews on the web, automatically analyzing and extracting knowledge from those reviews has become very important; it is, however, a challenging task to perform manually. Opinion mining is a text mining discipline that automatically performs such a task. Most research in this field has focused on English texts, with very limited research on the Arabic language; this scarcity is due to the many obstacles Arabic presents. The aim of this paper is to develop a novel semantic feature-based opinion mining framework for Arabic reviews. This framework utilizes the semantics of ontologies and lexicons in the identification of opinion features and their polarity. Experiments showed that the proposed framework achieved a good level of performance compared with manually collected test data.

Author 1: Abdullah M. Alkadri
Author 2: Abeer M. ElKorany

Keywords: Opinion Mining; Sentimental Analysis; Ontology; Feature extraction; Polarity identification

PDF

Paper 77: The Information-Seeking Problem in Human-Technology Interaction

Abstract: In the history of information seeking, the intention behind a query and the posed query have some level of distance between them. Because human query-responders are innately connected to times and trends and have the ability to understand natural language and human intention, they have often been the idealistic sources of knowledge direction. As the quantity and depth of human knowledge grow, technological systems have sought to utilize natural language, both spoken and written, as a format of accepted queries. Modern works seek to improve such systems utilizing distance metrics from literal queries to understood questions, with maps to knowledge bases. However, these methods do not often take into account the value of information in terms of query interpretation for mapping, and as such may have identifiable limitations compared with human responders. In this paper, a model for information value is proposed and existing works in speech and query recognition are discussed relative to their considerations of information value.

Author 1: Mohammad Alsulami
Author 2: Asadullah Shaikh

Keywords: Information seeking; seeking problem in HCI; HCI Information seeking

PDF

Paper 78: Virtual Heterogeneous Model Integration Layer

Abstract: The classic way of building software today simplistically consists of connecting a piece of code calling a method with the piece of code implementing that method. We consider pieces of code (software systems) that do not call anything, behave in a non-deterministic way and provide complex sets of services in different domains. In software engineering, reusability is the holy grail, and the reuse of code from autonomous tools in particular requires powerful composition/integration mechanisms. These systems are developed by different developers and modified incrementally. Integrating such autonomous tools generates various conflicts. To deal with these conflicts, current integration mechanisms define specific sets of rules to resolve them and accomplish the integration. Even so, there remains a significant chance that updates made by one developer cancel those made by others, or must be reworked to remain compliant with them. The approach presented here claims three contributions in the field of heterogeneous software integration. First, it eliminates the need for a conflict resolution mechanism. Second, it provides a mechanism for working in the presence of conflicts without resolving them. Finally, the integration mechanism is not affected if either of the systems evolves. We do this by introducing an intermediate virtual layer between the two systems that introduces delta models consisting of three parts: visibility, which shares required elements; hiding, which hides conflicting elements; and aliasing, which aliases the same concepts in both systems.

Author 1: Muhammad Ali Memon
Author 2: Asadullah Shaikh
Author 3: Khizer Hayat
Author 4: Mutiullah Shaikh

Keywords: Model Driven Engineering; Co-evolution; Co-adaptation; Delta models; Model Integration

PDF

Paper 79: A Survey of Cloud Migration Methods: A Comparison and Proposition

Abstract: Along with the significant advantages of the cloud computing paradigm, the number of enterprises that expect to move a legacy system to the cloud is steadily increasing. Unfortunately, this move is not straightforward; there are many challenges to take up. The applications are often written with outdated technologies. While some enterprises redevelop applications with a specific cloud provider in mind, others try to move their legacy systems, either because the organization wants to preserve past investments, or because the legacy systems hold important data. Migrating legacy systems to the cloud introduces technical and business challenges. This paper studies in depth and compares existing cloud migration methods based on the Model Driven Engineering (MDE) approach, to highlight the strengths and weaknesses of each one. Finally, we propose a cloud legacy system migration method relying on Architecture Driven Modernization (ADM) and explain its working process.

Author 1: Khadija SABIRI
Author 2: Faouzia BENABBOU
Author 3: Mustapha HAIN
Author 4: Hicham MOUTACHAOUIK
Author 5: Khalid AKODADI

Keywords: Application on premise; Migration methods; Cloud migration Method; PIM; PSM; ADM

PDF
