IJACSA Volume 4 Issue 6

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially as long as the original work is properly cited.

View Full Issue

Paper 1: A multi-scale method for automatically extracting the dominant features of cervical vertebrae in CT images

Abstract: Localization of the dominant points of the cervical spine in medical images is important for improving medical automation in clinical head and neck applications. In order to automatically and precisely identify the dominant points of cervical vertebrae in neck CT images, we propose a method based on multi-scale contour analysis to analyze the deformable shape of the spine. To extract the spine contour, we introduce a method that automatically generates the initial contour of the spine shape, from which the distance field for level set active contour iterations can also be deduced. In the shape analysis stage, we first coarsely segment the extracted contour at the zero-crossing points of the curvature, modeling the spine shape through curvature scale space analysis. Each segmented curve is then analyzed geometrically based on the turning angle property at different scales, and the local extreme points are extracted and verified as the dominant feature points. The vertices of the shape contour are approximately derived from the analysis at coarse scale and then adjusted precisely at fine scale. Experimental results show a success rate of 93.4% and an accuracy of 0.37 mm compared with manual results.
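
As an informal illustration of the curvature-scale-space idea in this abstract (not the authors' implementation), the sketch below smooths a closed contour at a chosen Gaussian scale and returns the zero-crossings of its curvature; the contour arrays and the scale value are assumed inputs.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def curvature_zero_crossings(x, y, sigma):
    """Zero-crossings of curvature for a closed contour smoothed at scale sigma."""
    xs = gaussian_filter1d(np.asarray(x, float), sigma, mode="wrap")
    ys = gaussian_filter1d(np.asarray(y, float), sigma, mode="wrap")
    dx, dy = np.gradient(xs), np.gradient(ys)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    kappa = (dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5
    # sign changes of curvature mark candidate segmentation points on the contour
    return np.where(np.diff(np.sign(kappa)) != 0)[0]

# coarse-to-fine use: locate candidates at a large sigma, then refine at a small sigma
```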

Author 1: Tung-Ying Wu
Author 2: Sheng-Fuu Lin

Keywords: cervical spine; active contour; curvature scale space; turning angle.

Download PDF

Paper 2: Evolutionary approach to optimisation of the operation of electric power distribution networks

Abstract: This paper presents the idea of using a classifying system and a co-evolutionary algorithm to support the operators of electric power distribution systems. The proposed method is characterized by the short time needed to designate the most rational post-breakdown configurations in complex Medium Voltage electric power distribution network structures. It is the classifying system working with the co-evolutionary algorithm that enables the effective creation of substitute scenarios for the Medium Voltage electric power distribution network. The method may be used in current systems managing the operation of distribution networks to assist network operators in taking decisions concerning connection actions in supervised electric power systems.

Author 1: Jan Stepien
Author 2: Sylwester Filipiak

Keywords: evolutionary algorithms; distribution power networks; electric breakdown

Download PDF

Paper 3: Expected Reliability of Everyday- and Ambient Assisted Living Technologies

Abstract: To obtain valuable information about expected reliability in everyday technologies compared to Ambient Assisted Living (AAL) technologies, an online survey was conducted covering five everyday technologies (train, dishwasher, navigation system, computer, mobile phone) and three AAL technologies (stove, window, floor sensors). The age range of the 206 participants (109 male; 97 female) was from 14 to 88 years (mean = 38.0). The descriptive analysis indicates expected reliabilities of more than 90% for most technologies; only train punctuality is considered less reliable, with a mean expected reliability of 86%. Furthermore, t-tests show that the three AAL technologies are expected to have a higher reliability than the everyday technologies. Additionally, a sample split at the age of 50 years indicates that elderly participants expect technologies to have a higher reliability than younger participants do. Using these findings, in a next step an experiment with different reliability levels of AAL technologies will be designed. This differentiation will be used to measure the influence of reliability on trust and intention to use in the context of Ambient Assisted Living.
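
The group comparison described above can be reproduced in spirit with a Welch's t-test; the ratings below are made-up placeholder values, not the survey data.

```python
from scipy import stats

# hypothetical expected-reliability ratings (percent) for one everyday and one AAL technology
everyday = [90, 88, 95, 92, 85, 91]
aal = [96, 97, 93, 98, 95, 99]

t, p = stats.ttest_ind(aal, everyday, equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.3f}")
```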

Author 1: Frederick Steinke
Author 2: Tobias Fritsch
Author 3: Andreas Hertzer
Author 4: Helmut Tautz
Author 5: Simon Zickwolf

Keywords: Ambient Assisted Living; Elderly People; Expected Reliability; Online Survey; Technology

Download PDF

Paper 4: Modeling the Cut-off Frequency of Acoustic Signal with an Adaptative Neuro-Fuzzy Inference System (ANFIS)

Abstract: An Adaptive Neuro-Fuzzy Inference System (ANFIS), a new flexible tool, is applied to predict the cut-off frequencies of the symmetric and anti-symmetric circumferential waves (Si and Ai, i=1,2) propagating around an elastic aluminum cylindrical shell of various radius ratios b/a (a: outer radius, b: inner radius). The Wigner-Ville time-frequency analysis and the proper modes theory are used in this study to compare with and validate the frequency values predicted by the ANFIS model. The cut-off frequency data (ka)c are used to train and to test the performance of the model. These data are determined from the values calculated using the proper modes theory of resonances and from those determined using the Wigner-Ville time-frequency images. The material density, the radius ratio b/a, the index i of the symmetric and anti-symmetric circumferential waves, and the longitudinal and transverse velocities of the material constituting the tube are selected as the input parameters of the ANFIS model. This technique is able to model and predict the cut-off frequencies of the symmetric and anti-symmetric circumferential waves with high precision, based on different estimation errors such as the mean relative error (MRE), mean absolute error (MAE) and standard error (SE). Good agreement is obtained between the output values predicted using the proposed model and those computed by the proper modes theory.

Author 1: Y. NAHRAOUI
Author 2: E.H. AASSIF
Author 3: G.Maze
Author 4: R.LATIF

Keywords: ANFIS; time-frequency; SPWV; acoustic scattering; acoustic circumferential waves; cut-off frequency; cylindrical shell.

Download PDF

Paper 5: The quest towards a winning Enterprise 2.0 collaboration technology adoption strategy

Abstract: Although Enterprise 2.0 collaboration technologies present enterprises with a significant amount of business benefits, enterprises are still facing challenges in promoting and sustaining end-user adoption. The purpose of this paper is to provide a systematic review of Enterprise 2.0 collaboration technology adoption models and challenges, as well as emerging approaches that purport to address these challenges. The paper presents four critical Enterprise 2.0 adoption elements that need to form part of an Enterprise 2.0 collaboration technology adoption strategy. The four critical elements were derived from the ‘SHARE 2013 for business users’ conference held in Johannesburg, South Africa, in 2013, as well as a review of the existing literature. The four adoption elements are enterprise strategic alignment; adoption strategy; governance; and communication, training and support. These critical elements allow enterprises to ensure strategic alignment between the chosen Enterprise 2.0 collaboration technology toolset and the chosen business strategies. In addition, by reviewing and selecting an appropriate adoption strategy that incorporates governance, communication and a training and support system, the enterprise can improve its ability to run a successful Enterprise 2.0 adoption campaign.

Author 1: Robert Louw
Author 2: Jabu Mtsweni

Keywords: Web 2.0; Enterprise 2.0; collaboration; technology adoption; adoption strategy; critical adoption elements

Download PDF

Paper 6: Face Recognition System Based on Different Artificial Neural Networks Models and Training Algorithms

Abstract: Face recognition is one of the biometric methods used to identify a given face image using the main features of that face. In this research, a face recognition system is suggested based on four Artificial Neural Network (ANN) models used separately: the feed forward backpropagation neural network (FFBPNN), cascade forward backpropagation neural network (CFBPNN), function fitting neural network (FitNet) and pattern recognition neural network (PatternNet). Each model was constructed separately with 7 layers (an input layer, 5 hidden layers each with 15 hidden units, and an output layer). Six ANN training algorithms (TRAINLM, TRAINBFG, TRAINBR, TRAINCGF, TRAINGD, and TRAINGD) were used to train each model separately. Many experiments were conducted for each of the four models based on the 6 different training algorithms. The performance results of these models were compared according to mean square error and recognition rate to identify the best ANN model. The results showed that the PatternNet model performed best. Finally, comparisons between the training algorithms were performed, and the comparison results showed that TRAINLM was the best training algorithm for the face recognition system.

Author 1: Omaima N. A. AL-Allaf
Author 2: Abdelfatah Aref Tamimi
Author 3: Mohammad A. Alia

Keywords: Face Recognition; Backpropagation Neural Network (BPNN); Feed Forward Neural Network; Cascade Forward; Function Fitting; Pattern Recognition

Download PDF

Paper 7: Image Blocks Model for Improving Accuracy in Identification Systems of Wood Type

Abstract: Image-based recognition systems commonly use an image extracted from the target object using texture analysis. However, some of the recognition systems for wood types proposed and implemented up to this time have not achieved adequate accuracy, efficiency and feasible execution speed with respect to practicality. This paper discusses a new method of image-based recognition for wood type identification that divides the wood image into several blocks, each of which is extracted using gray image and edge detection techniques. The wood feature analysis concentrates on three parameters: entropy, standard deviation, and correlation. Our experimental results show that our method can increase the recognition accuracy up to 95%, which is faster and better than the previous existing method with 85% recognition accuracy. Moreover, our method needs to analyze only three feature parameters, compared to the seven feature parameters required by the previous existing method, thus implying a simpler and faster recognition process.
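
A minimal sketch of the per-block feature extraction described here, assuming a grayscale image as a NumPy array; the block size and the use of adjacent-pixel correlation are illustrative choices, not taken from the paper.

```python
import numpy as np

def block_features(gray, block=64):
    """Entropy, standard deviation and adjacent-pixel correlation for each block."""
    feats = []
    h, w = gray.shape
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            b = gray[i:i + block, j:j + block].astype(float)
            hist, _ = np.histogram(b, bins=256, range=(0, 256))
            p = hist / hist.sum()
            p = p[p > 0]
            entropy = -np.sum(p * np.log2(p))
            corr = np.corrcoef(b[:, :-1].ravel(), b[:, 1:].ravel())[0, 1]
            feats.append([entropy, b.std(), corr])
    return np.array(feats)
```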

Author 1: Gasim
Author 2: Kudang Boro Seminar
Author 3: Agus Harjoko
Author 4: Sri Hartati

Keywords: image processing; pattern recognition; ANN; wood identification.

Download PDF

Paper 8: A Strategy for Training Set Selection in Text Classification Problems

Abstract: An issue in text classification problems involves the choice of good samples on which to train the classifier. Training sets that properly represent the characteristics of each class have a better chance of establishing a successful predictor. Moreover, sometimes data are redundant or take large amounts of computing time for the learning process. To overcome this issue, data selection techniques have been proposed, including instance selection. Some data mining techniques are based on nearest neighbors, ordered removals, random sampling, particle swarms or evolutionary methods. The weaknesses of these methods usually involve a lack of accuracy, lack of robustness when the amount of data increases, overfitting and a high complexity. This work proposes a new immune-inspired suppressive mechanism that involves selection. As a result, data that are not relevant for a classifier’s final model are eliminated from the training process. Experiments show the effectiveness of this method, and the results are compared to other techniques; these results show that the proposed method has the advantage of being accurate and robust for large data sets, with less complexity in the algorithm.

Author 1: Maria Luiza C. Passini
Author 2: Katiusca B. Estébanez
Author 3: Grazziela P. Figueredo
Author 4: Nelson F. F. Ebecken

Keywords: text mining; data reduction; classification problems; feature selection

Download PDF

Paper 9: Study of the capacity of Optical Network On Chip based on MIMO (Multiple Input Multiple Output) system

Abstract: When designing Optical Networks-on-Chip, designers have resorted to making emitters (lasers) and receivers (photo-detectors) communicate through a waveguide based mainly on optical routers called λ-routers. In this paper, we propose a new method based on the Multiple Input Multiple Output concept, give a model of the channel propagation, and then study the influence of different parameters on the design of Optical Networks-on-Chip.

Author 1: S. Mhatli
Author 2: B.Nsiri
Author 3: R.Attia

Keywords: λ-ROUTER; MIMO CHANNELS; CAPACITY; CDMA

Download PDF

Paper 10: Face Recognition as an Authentication Technique in Electronic Voting

Abstract: In this research, a Face Detection and Recognition (FDR) system used as an authentication technique in online voting, which is one type of electronic voting, is proposed. Web-based voting allows the voter to vote from any place in or out of the state. The voter’s image is captured and passed to a face detection algorithm (Eigenface or Gabor filter) which detects his face from the image and saves it as the first matching point. The voter’s national identification card number is used to retrieve his saved photo from the database of the Supreme Council elections (SCE), which is passed to the same detection algorithm (Eigenface or Gabor filter) to detect the face and save it as the second matching point. The two matching points are used by a matching algorithm to check whether they are identical or not. If the two points match, the system then checks whether this person has the right to vote; if he does, a voting form is presented to him. The results show that the proposed algorithm is capable of finding over 90% of the faces in the database and allows the voter to vote in approximately 58 seconds.

Author 1: Noha E. El-Sayad
Author 2: Rabab Farouk Abdel-Kader
Author 3: Mahmoud Ibraheem Marie

Keywords: Electronic Voting; Face Recognition; Gabor Filter; Eigenface.

Download PDF

Paper 11: Generating a Domain Specific Inspection Evaluation Method through an Adaptive Framework

Abstract: The electronic information revolution and the use of computers as an essential part of everyday life are now more widespread than ever before, as the Internet is exploited for the speedy transfer of data and business. Social networking sites (SNSs), such as LinkedIn, Ecademy and Google+ are growing in use worldwide, and they present popular business channels on the Internet. However, they need to be continuously evaluated and monitored to measure their levels of efficiency, effectiveness and user satisfaction, ultimately to improve quality. Nearly all previous studies have used Heuristic Evaluation (HE) and User Testing (UT) methodologies, which have become the accepted methods for the usability evaluation of User Interface Design (UID); however, the former is general, and unlikely to encompass all usability attributes for all website domains. The latter is expensive, time-consuming and misses consistency problems. To address this need, a new evaluation method is developed using traditional evaluations (HE and UT) in novel ways. The lack of an adaptive methodological framework that can be used to generate a domain-specific evaluation method, which can then be used to improve the usability assessment process for a product in any chosen domain, represents a missing area in usability testing. This paper proposes an adaptive framework that is readily capable of adaptation to any domain, and then evaluates it by generating an evaluation method for assessing and improving the usability of products in a particular domain. The evaluation method is called Domain Specific Inspection (DSI), and it is empirically, analytically and statistically tested by applying it on three websites in the social networks domain. Our experiments show that the adaptive framework is able to build a formative and summative evaluation method that provides optimal results with regard to our newly identified set of comprehensive usability problem areas as well as relevant usability evaluation method (UEM) metrics, with minimum input in terms of the cost and time usually spent on employing traditional usability evaluation methods (UEMs).

Author 1: Roobaea AlRoobaea
Author 2: Ali H. Al-Badi
Author 3: Pam J. Mayhew

Keywords: Heuristic Evaluation (HE); User Testing (UT); Domain Specific Inspection (DSI); adaptive framework; social networks domain.

Download PDF

Paper 12: Proposed Multi-Modal Palm Veins-Face Biometric Authentication

Abstract: Biometric authentication technology identifies people by their unique biological information. An account holder’s body characteristics or behaviors are registered in a database and then compared with others who may try to access that account to see if the attempt is legitimate. Since veins are internal to the human body, their information is hard to duplicate. Compared with a finger or the back of a hand, a palm has a broader and more complicated vascular pattern and thus contains a wealth of differentiating features for personal identification. However, a single biometric is not sufficient to meet the variety of requirements, including matching performance, imposed by several large-scale authentication systems. Multi-modal biometric systems seek to alleviate some of the drawbacks encountered by uni-modal biometric systems by consolidating the evidence presented by multiple biometric traits/sources. This paper proposes a multi-modal authentication technique based on palm veins as a personal identifying factor, augmented by face features to increase the accuracy of security recognition. The obtained results point at an increased authentication accuracy.

Author 1: S.F. Bahgat
Author 2: S. Ghoniemy
Author 3: M. Alotaibi

Keywords: Biometric authentication; Face Recognition; Feature Fusion; Palm veins; Statistical features.

Download PDF

Paper 13: Micro Sourcing Strategic Framework for Low Income Group

Abstract: The role of ICTs among poor people and communities has increased tremendously. One of the ICT industries – the micro sourcing industry – has been identified as a potential industry to help increase income for the poor in Malaysia. Micro sourcing is an effective way to accomplish tedious tasks at a faster rate. It involves large projects that are broken down into micro tasks. These micro tasks are well defined and then distributed to a group of workers. The objective of this study is to develop a strategic framework for micro sourcing to generate income for the low income group. Four methods were used to gather information for this study: documentation and literature reviews, focus group meetings, workshops and interviews. Based on the analysis of the current scenario of the local micro sourcing industry, a strategic framework was developed based on the five Strategic Thrusts identified. The Strategic Thrusts are harnessing the demand side (job providers) of the domestic and international market; platform capacity and capability building; leveraging and utilising existing infrastructure; uplifting and enhancing the capability of the supply side (micro workers); and instruments to expedite the growth of the local micro sourcing industry. The Strategic Framework is intended to provide strategic direction at the national level to all stakeholders; to highlight key areas that need to be addressed in order to grow a sustainable micro sourcing industry in the country; and to serve as a guideline in the implementation of programs and plans related to micro sourcing industry development.

Author 1: Noor Habibah Arshad
Author 2: Siti Salwa Salleh
Author 3: Syaripah Ruzaini Syed Aris
Author 4: Norjansalika Janom
Author 5: Norazam Mastuki

Keywords: Capability building; expedite growth; harnessing demand; platform capacity; strategic thrusts

Download PDF

Paper 14: A New Algorithm to Represent Texture Images

Abstract: In recent times, spatial autoregressive models have been extensively used to represent images. In this paper we propose an algorithm to represent and reproduce texture images based on the estimation of spatial autoregressive processes. The image intensity is locally modeled by a first spatial autoregressive model with support in a strongly causal prediction region on the plane. A basic criterion to quantify similarity between two images is used to locally select this region among four different possibilities, corresponding to the four strongly causal regions on the plane. Two global image similarity measures are used to evaluate the performance of our proposal.

Author 1: Silvia María Ojeda
Author 2: Grisel Maribel Britos

Keywords: Autoregressive Models; Texture Images; Similarity Measures.

Download PDF

Paper 15: Image and Video based double watermark extraction spread spectrum watermarking in low variance region

Abstract: Digital watermarking plays a very important role in copyright protection. It is one of the techniques used for safeguarding the origins of images, audio and video by protecting them against piracy. This paper proposes a low-variance-based spread spectrum watermarking scheme for image and video in which the watermark is obtained twice in the receiver. The watermark to be added is a binary image of comparatively smaller size than the cover image. The cover image is divided into a number of 8x8 blocks and transformed into the frequency domain using the Discrete Cosine Transform. A gold sequence is added as well as subtracted in each block for each watermark bit. In most cases, researchers have generally used algorithms for extracting a single watermark, and finding the location of a watermark bit distorted by attacks is one of the most challenging tasks. However, in this paper the same watermark is embedded as well as extracted twice with a gold code without much distortion of the image, and comparing these two watermarks helps in finding the distorted bit. Another feature is that, as this algorithm is based on embedding the watermark in a low variance region, proper extraction of the watermark is obtained at a smaller modulating factor. The proposed algorithm is very useful in applications like real-time broadcasting, image and video authentication and secure camera systems. The experimental results show that the watermarking technique is robust against various attacks.
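
To make the embedding idea concrete, here is a rough sketch (assumptions: an 8x8 block, a random ±1 sequence standing in for the gold code, and a fixed modulating factor alpha), not the authors' algorithm:

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
pn = rng.choice([-1.0, 1.0], size=20)   # placeholder PN sequence standing in for a gold code

def embed_bit(block, bit, alpha=2.0):
    """Embed one watermark bit by +/- modulating mid-band DCT coefficients of an 8x8 block."""
    C = dctn(block.astype(float), norm="ortho")
    flat = C.ravel()
    flat[10:10 + len(pn)] += alpha * (1.0 if bit else -1.0) * pn  # skip low-frequency terms
    return idctn(flat.reshape(block.shape), norm="ortho")

def extract_bit(block):
    """Correlate the same coefficients with the sequence; the sign of the correlation gives the bit."""
    C = dctn(block.astype(float), norm="ortho").ravel()
    return int(np.dot(C[10:10 + len(pn)], pn) > 0)
```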

Author 1: Mriganka Gogoi
Author 2: Koushik Mahanta
Author 3: H.M.Khalid Raihan Bhuyan
Author 4: Dibya Jyoti Das
Author 5: Ankita Dutta

Keywords: Watermark; Gold Code; Variance; Correlation.

Download PDF

Paper 16: A Framework for Creating a Distributed Rendering Environment on the Compute Clusters

Abstract: This paper discusses the deployment of an existing render farm manager in a typical compute cluster environment such as a university. Usually, both a render farm and a compute cluster use different queue managers and assume total control over the physical resources. However, taking physical resources out of an existing compute cluster in a university-like environment, whose primary use of the cluster is to run numerical simulations, may not be possible. It can potentially reduce overall resource utilization in a situation where compute tasks outnumber rendering tasks. Moreover, it can increase the system administration cost. In this paper, a framework is proposed that creates a dynamic distributed rendering environment on top of compute clusters using existing render farm managers, without requiring the physical separation of the resources.

Author 1: Ali Sheharyar
Author 2: Othmane Bouhali

Keywords: distributed; rendering; animation; render farm; cluster

Download PDF

Paper 17: Integrating Social Network Services with Vehicle Tracking Technologies

Abstract: This paper presents the design and implementation of a newly proposed vehicle tracking system that uses a popular social network as a value-added service for a traditional tracking system. The proposed tracking system makes use of the Google Maps service to trace the vehicle; each vehicle has an account containing Google Maps posts that display the vehicle location in real-time mode. A hardware module inside the vehicle uses the Global Positioning System (GPS) to detect the vehicle location and the Global System for Mobile communication (GSM) to update the vehicle location in the vehicle’s account on the social network. The system uses the well-known Arduino microcontroller to control the GSM-GPS modem. The proposed system can be used for a broad range of applications such as traffic management, vehicle tracking/anti-theft systems, and traffic routing and navigation. It can be applied in many business cases, such as public transportation, so passengers can track their buses and trains by following the vehicle’s account on the social network. It can also be used in the private business sector as an easy and simple fleet tracking and management system, or by anyone who wants to track his car or find his way in case he gets lost.

Author 1: Ahmed ElShafee
Author 2: Mahmoud ElMenshawi
Author 3: Mena Saeed

Keywords: Vehicle Tracking; GSM; GPS; Microcontrollers; Twitter; Google maps.

Download PDF

Paper 18: An Efficient Approach for Image Filtering by Using Neighbors pixels

Abstract: Image processing refers to the use of algorithms to perform processing on digital images. Microscopic images, such as images of some microorganisms, contain different types of noise which reduce the quality of the images. Noise removal is an important but difficult task in image processing, and residual noise degrades the visual quality of an image. This paper proposes a de-noising approach based on averaging the pixels in a 5x5 window.
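
A minimal version of the 5x5 averaging filter, together with the PSNR measure mentioned in the keywords (the function names are ours, not the paper's):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def denoise_5x5(img):
    """Replace each pixel with the mean of its 5x5 neighbourhood."""
    return uniform_filter(img.astype(float), size=5)

def psnr(reference, test):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)
```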

Author 1: Smrity Prasad
Author 2: N.Ganesan

Keywords: Salt & Pepper Noise; Filter; PSNR; MSE

Download PDF

Paper 19: A Comparative Study of Three TDMA Digital Cellular Mobile Systems (GSM, IS-136 NA-TDMA and PDC) Based On Radio Aspect

Abstract: As mobile and personal communication services and networks evolve to provide seamless global roaming and improved quality of service to their users, the role of such networks in numbering, identification and quality of service will become increasingly important and well defined. All of this will enhance the performance of present as well as future mobile and personal communication networks, provide national management functions in mobile communication networks, and provide national and international roaming. Moreover, these require standardized subscriber identities. To meet these demands, mobile computing would use standard networks. Thus, in this study the researcher attempts to highlight a comparative picture of the three standard digital cellular mobile communication systems: (i) Global System for Mobile (GSM) -- the European Time Division Multiple Access (TDMA) digital cellular standard, (ii) Interim Standard-136 (IS-136) -- the North American TDMA digital cellular standard (D-AMPS), and (iii) Personal Digital Cellular (PDC) -- the Japanese TDMA digital cellular standard.

Author 1: Laishram Prabhakar

Keywords: Comparative Study; GSM; IS-136 TDMA; PDC;Radio Aspect

Download PDF

Paper 20: Format SPARQL Query Results into HTML Report

Abstract: SPARQL is a powerful query language for querying semantic data and is recognized by the W3C as the query language for RDF. It defines several query result formats such as CSV, TSV and XML. These formats are not attractive, understandable or readable; the results need to be converted into an appropriate format so that users can easily understand them, and the above formats require additional transformations or tool support to present query results in a user-readable form. The main aim of this paper is to propose a method to build an HTML report dynamically for SPARQL query results. This enables SPARQL query results to be displayed easily in an attractive, understandable HTML report format without the support of any additional or external tools or transformations.
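
The paper builds its reports on an Oracle 11g semantic store with the Jena adapter; purely as an illustration of turning SPARQL results into an HTML table, here is a small rdflib-based sketch (file names and query are placeholders):

```python
from rdflib import Graph

g = Graph()
g.parse("data.ttl", format="turtle")   # hypothetical RDF data file
res = g.query("SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10")

header = "".join(f"<th>{v}</th>" for v in res.vars)
body = "".join(
    "<tr>" + "".join(f"<td>{val}</td>" for val in row) + "</tr>" for row in res
)
with open("report.html", "w") as f:
    f.write(f"<html><body><table border='1'><tr>{header}</tr>{body}</table></body></html>")
```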

Author 1: Dr Sunitha Abburu
Author 2: G.Suresh Babu

Keywords: SPARQL query; Oracle database 11g semantic store; Jena adapter; HTML report.

Download PDF

Paper 21: A Comprehensive Evaluation of Weight Growth and Weight Elimination Methods Using the Tangent Plane Algorithm

Abstract: The tangent plane algorithm is a fast sequential learning method for multilayered feedforward neural networks that accepts almost zero initial conditions for the connection weights, with the expectation that only the minimum number of weights will be activated. However, the inclusion of a tendency to move away from the origin in weight space can lead to large weights that are harmful to generalization. This paper evaluates two techniques used to limit the size of the weights, weight growth and weight elimination, in the tangent plane algorithm. Comparative tests were carried out using the Extreme Learning Machine (ELM), which is a fast global minimiser giving good generalization. Experimental results show that the generalization performance of the tangent plane algorithm with weight elimination is at least as good as that of the ELM algorithm, making it a suitable alternative for problems that involve time-varying data such as EEG and ECG signals.
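
For readers unfamiliar with weight elimination, a commonly used form of the penalty and its gradient look like the following; the regularisation constants are placeholders and this is not necessarily the paper's exact formulation:

```python
import numpy as np

def weight_elimination(w, w0=1.0, lam=1e-4):
    """Weight-elimination penalty lam * sum (w/w0)^2 / (1 + (w/w0)^2) and its gradient."""
    u = (w / w0) ** 2
    penalty = lam * np.sum(u / (1.0 + u))
    grad = lam * (2.0 * w / w0 ** 2) / (1.0 + u) ** 2   # added to the error gradient during training
    return penalty, grad
```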

Author 1: P May
Author 2: E Zhou
Author 3: C. W. Lee

Keywords: neural networks; backpropagation; generalization; tangent plane; weight elimination; extreme learning machine

Download PDF

Paper 22: Exploiting the Role of Hardware Prefetchers in Multicore Processors

Abstract: The processor-memory speed gap referred to as memory wall, has become much wider in multi core processors due to a number of cores sharing the processor-memory interface. In addition to other cache optimization techniques, the mechanism of prefetching instructions and data has been used effectively to close the processor-memory speed gap and lower the memory wall. A number of issues have emerged when prefetching is used aggressively in multicore processors. The results presented in this paper are an indicator of the problems that need to be taken into consideration while using prefetching as a default technique. This paper also quantifies the amount of degradation that applications face with the aggressive use of prefetching. Another aspect that is investigated is the performance of multicore processors using a multiprogram workload as compared to a single program workload while varying the configuration of the built-in hardware prefetchers. Parallel workloads are also investigated to estimate the speedup and the effect of hardware prefetchers. This paper is the outcome of work that forms a part of the PhD research project currently in progress at NED University of Engineering and Technology, Karachi.

Author 1: Hasina Khatoon
Author 2: Shahid Hafeez Mirza
Author 3: Talat Altaf

Keywords: Multicore; prefetchers; prefetch-sensitive; memory wall; aggressive prefetching; multiprogram workload; parallel workload.

Download PDF

Paper 23: Improving Assessment Management Using Tools

Abstract: This paper firstly explains the importance of assessment management, then introduces two assessment tools currently used in the School of Information Technology at Deakin University. A comparison of assignment marking was conducted after collecting test data from three sets of assignments. The importance of providing detailed marking guides and personalized comments is emphasized and future possible extension to the tools is also discussed at the end of this paper.

Author 1: Shang Gao
Author 2: Jo Coldwell-Neilson
Author 3: Andrzej Goscinski

Keywords: assessment management; WebCT Vista; Desire2Learn; CloudDeakin; marking guide; personalized comment; Markers Assistant; On-line Grades System

Download PDF

Paper 24: Data fusion based framework for the recognition of Isolated Handwritten Kannada Numerals

Abstract: Combining classifiers appears as a natural step forward when a critical mass of knowledge of single classifier models has been accumulated. Although there are many unanswered questions about matching classifiers to real-life problems, combining classifiers is rapidly growing and enjoying a lot of attention from the pattern recognition and machine learning communities. For any pattern classification task, an increase in data size, number of classes, dimension of the feature space and interclass separability affects the performance of any classifier. It is essential to know the effect of the training dataset size on the recognition performance of a feature extraction method and classifier. In this paper, an attempt is made to measure the performance of the classifier by testing it with two different datasets of different sizes. In practical classification applications, if the number of classes and multiple feature sets for pattern samples are given, a desirable recognition performance can be achieved by data fusion. In this paper, we propose a framework based on the combined concepts of decision fusion and feature fusion for isolated handwritten Kannada numerals classification. The proposed method improves the classification result. From the experimental results it is seen that there is an increase of 13.95% in the recognition accuracy.
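
A toy contrast between feature fusion and decision fusion with a K-NN classifier, assuming two feature matrices for the same samples (the sum rule and k value are illustrative, not the paper's settings):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def fused_predictions(X_a, X_b, y, Xt_a, Xt_b, k=5):
    """Return predictions from feature fusion (concatenation) and decision fusion (sum rule)."""
    # feature fusion: concatenate the two feature sets and train one classifier
    knn_f = KNeighborsClassifier(n_neighbors=k).fit(np.hstack([X_a, X_b]), y)
    p_feature = knn_f.predict(np.hstack([Xt_a, Xt_b]))
    # decision fusion: train one classifier per feature set and combine their posteriors
    knn_a = KNeighborsClassifier(n_neighbors=k).fit(X_a, y)
    knn_b = KNeighborsClassifier(n_neighbors=k).fit(X_b, y)
    proba = knn_a.predict_proba(Xt_a) + knn_b.predict_proba(Xt_b)
    p_decision = knn_a.classes_[proba.argmax(axis=1)]
    return p_feature, p_decision
```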

Author 1: Mamatha. H.R
Author 2: Sucharitha Srirangaprasad
Author 3: Srikantamurthy K

Keywords: feature selection; feature fusion; decision fusion; Curvelet transform; K-NN classifier; data fusion; isolated handwritten Kannada numerals; OCR;

Download PDF

Paper 25: Designing a Markov Model for the Analysis of 2-tier Cognitive Radio Network

Abstract: Cognitive Radio Network (CRN) aims to reduce spectrum congestion by allowing secondary users to utilize idle spectrum bands in the absence of primary users. However, the overall user capacity and hence, the system throughput is bounded by the total number of available idle channels in the system. This paper aims to solve the problem of limited user capacity in basic CRN by proposing a 2-tier CRN that allows another tier (or layer) of secondary users to transmit, in addition to the already existing set of primary and secondary users in the system. Markov Models are designed step-wise to map the interaction between primary and secondary users in both tiers by including suitable traffic distribution models and system parameters. Spectrum handoff is also incorporated in the developed Markov Models. Performance analysis is carried out in terms of SU transmission, dropping, blocking and handoff probabilities along with mathematical formulation of the overall SU throughput in 2-tier CRN. It confirms better spectrum utilization in spectrum handoff enabled 2-tier CRN over basic CRN with enhancement in quality of service for secondary users in terms of reduced dropping and blocking probabilities.
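
The paper's 2-tier model with spectrum handoff is considerably richer than this, but as a reminder of how blocking probability falls out of a simple loss-system Markov chain, here is the textbook Erlang-B recursion (a single-class simplification, not the authors' model):

```python
def erlang_b(channels, offered_load_erlangs):
    """Blocking probability of an M/M/c/c loss system, computed iteratively."""
    b = 1.0
    for k in range(1, channels + 1):
        b = offered_load_erlangs * b / (k + offered_load_erlangs * b)
    return b

print(erlang_b(channels=10, offered_load_erlangs=6.0))
```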

Author 1: Tamal Chakraborty
Author 2: Iti Saha Misra

Keywords: Cognitive Radio Network; 2-tier; Voice over IP; Markov Model; Spectrum Handoff

Download PDF

Paper 26: A Fuzzy Rule Based Forensic Analysis of DDoS Attack in MANET

Abstract: A Mobile Ad Hoc Network (MANET) is a mobile, distributed wireless network. In a MANET, each node is self-capable, supporting routing functionality in an ad hoc scenario and the forwarding of data or exchange of topology information using wireless communications. These characteristics give the network better scalability, but this advantage also opens the door to security compromises. One of the easiest forms of security compromise is the denial of service (DoS) attack; this attack may paralyze a node or the entire network, and when coordinated by a group of attackers it is considered a distributed denial of service (DDoS) attack. A typical DoS attack floods an excessive volume of traffic to deplete key resources of the target network. In a MANET, flooding can be carried out at the routing level. The ad hoc nature of MANETs calls for dynamic route management. Within flat ad hoc routing falls the reactive protocol subcategory, one of whose most prominent members is dynamic source routing (DSR), which works well for small numbers of nodes and low-mobility situations. DSR allows on-demand route discovery, for which nodes broadcast a route request (RREQ) message. Intelligently flooding RREQ messages to cause a DoS or DDoS attack that paralyzes the targeted network for a short duration is not very difficult to launch and can cause significant loss to the network. After an attack succeeds in crashing or disrupting the MANET for some period of time, the breach triggers an investigation. Forensic analysis of the attack scenario provides digital evidence against the attacker. In this paper, the parameters of RREQ flooding are identified, and on the basis of these parameters fuzzy-logic-based rules are deduced and described for both DoS and DDoS. We implemented a fuzzy forensic tool to determine whether a flooding RREQ attack is of DoS or DDoS form; various experiments and results for this implementation are elaborated in the paper.
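
A toy sketch of fuzzy rules over an RREQ rate, to illustrate the flavour of such a rule base (the membership breakpoints and thresholds are invented for illustration, not the paper's rules):

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def shoulder(x, a, b):
    """Right-shoulder membership: 0 below a, 1 above b."""
    return 0.0 if x <= a else 1.0 if x >= b else (x - a) / (b - a)

def rreq_assessment(rreq_rate, source_count):
    """Degrees of membership for normal traffic, DoS flooding and DDoS flooding."""
    normal = tri(rreq_rate, -1, 0, 10)
    flooding = shoulder(rreq_rate, 10, 30)
    # rule sketch: flooding from a single source suggests DoS; from many sources, DDoS
    dos = min(flooding, 1.0 if source_count == 1 else 0.0)
    ddos = min(flooding, shoulder(source_count, 1, 5))
    return {"normal": normal, "DoS": dos, "DDoS": ddos}
```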

Author 1: Ms. Sarah Ahmed
Author 2: Ms. S. M. Nirkhi

Keywords: DoS and DDoS attack; DSR; Fuzzy logic; MANET; Network forensic analysis.

Download PDF

Paper 27: A comparative study of Image Region-Based Segmentation Algorithms

Abstract: Image segmentation has recently become an essential step in image processing, as it largely conditions the interpretation that is done afterwards. It is still difficult to justify the accuracy of a segmentation algorithm regardless of the nature of the treated image. In this paper we perform an objective comparison of region-based segmentation techniques such as supervised and unsupervised deterministic classification, and non-parametric and parametric probabilistic classification. Eight methods that are well known and widely used in the scientific community have been selected and compared. Martin's criteria (GCE, LCE), the probabilistic Rand Index (RI), Variation of Information (VI) and Boundary Displacement Error (BDE) are used to evaluate the performance of these algorithms on Magnetic Resonance (MR) brain images, a synthetic MR image, and synthetic images. The MR brain images are composed of gray matter (GM), white matter (WM), cerebrospinal fluid (CSF) and other tissues, and the synthetic MR image is composed of the same as the real image plus edema and tumor. Results show that segmentation is an image-dependent process and that some of the evaluated methods are well suited for a better segmentation.
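
For reference, the (unadjusted) Rand Index counts the element pairs on which two segmentations agree; a brute-force version is shown below (fine for small label vectors, quadratic in the number of pixels):

```python
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Fraction of element pairs on which two labelings agree (same/different cluster)."""
    agree = total = 0
    for i, j in combinations(range(len(labels_a)), 2):
        same_a = labels_a[i] == labels_a[j]
        same_b = labels_b[i] == labels_b[j]
        agree += int(same_a == same_b)
        total += 1
    return agree / total

print(rand_index([0, 0, 1, 1], [1, 1, 0, 0]))   # identical partitions up to relabeling -> 1.0
```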

Author 1: Lahouaoui LALAOUI
Author 2: Tayeb MOHAMADI

Keywords: Evaluation criteria; Martin’s; Rand Index; Image Segmentation; Magnetic resonance image.

Download PDF

Paper 28: Automated Classification of L/R Hand Movement EEG Signals using Advanced Feature Extraction and Machine Learning

Abstract: In this paper, we propose an automated computer platform for the purpose of classifying Electroencephalography (EEG) signals associated with left and right hand movements using a hybrid system that uses advanced feature extraction techniques and machine learning algorithms. It is known that EEG represents the brain activity by the electrical voltage fluctuations along the scalp, and Brain-Computer Interface (BCI) is a device that enables the use of the brain’s neural activity to communicate with others or to control machines, artificial limbs, or robots without direct physical movements. In our research work, we aspired to find the best feature extraction method that enables the differentiation between left and right executed fist movements through various classification algorithms. The EEG dataset used in this research was created and contributed to PhysioNet by the developers of the BCI2000 instrumentation system. Data was preprocessed using the EEGLAB MATLAB toolbox and artifacts removal was done using AAR. Data was epoched on the basis of Event-Related (De) Synchronization (ERD/ERS) and movement-related cortical potentials (MRCP) features. Mu/beta rhythms were isolated for the ERD/ERS analysis and delta rhythms were isolated for the MRCP analysis. The Independent Component Analysis (ICA) spatial filter was applied on related channels for noise reduction and isolation of both artifactually and neutrally generated EEG sources. The final feature vector included the ERD, ERS, and MRCP features in addition to the mean, power and energy of the activations of the resulting Independent Components (ICs) of the epoched feature datasets. The datasets were inputted into two machine-learning algorithms: Neural Networks (NNs) and Support Vector Machines (SVMs). Intensive experiments were carried out and optimum classification performances of 89.8 and 97.1 were obtained using NN and SVM, respectively. This research shows that this method of feature extraction holds some promise for the classification of various pairs of motor movements, which can be used in a BCI context to mentally control a computer or machine.

Author 1: Mohammad H. Alomari
Author 2: Aya Samaha
Author 3: Khaled AlKamha

Keywords: EEG; BCI; ICA; MRCP; ERD/ERS; machine learning; NN; SVM

Download PDF

Paper 29: Case Study of Named Entity Recognition in Odia Using Crf++ Tool

Abstract: NER has been regarded as an efficient strategy to extract relevant entities for various purposes. The aim of this paper is to exploit a conventional method for NER in Odia by parameterizing the CRF++ tool in different ways. As a case study, we have used a gazetteer and POS tags to generate different feature sets in order to compare the performance of the NER task. The comparison study demonstrates how the proposed NER system works on different feature sets.

Author 1: Dr.Rakesh ch. Balabantaray
Author 2: Suprava Das
Author 3: Kshirabdhi Tanaya Mishra

Keywords: Named Entity Recognition; CRF++ Tool; Odia Named Entity

Download PDF

Paper 30: TX-Kw: An Effective Temporal XML Keyword Search

Abstract: Inspired by the great success of information retrieval (IR) style keyword search on the web, keyword search on XML has emerged recently. Existing methods cannot resolve challenges addressed by using keyword search in Temporal XML documents. We propose a way to evaluate temporal keyword search queries over Temporal XML documents. Moreover, we propose a new ranking method based on the time-aware IR ranking methods to rank temporal keyword search queries results. Extensive experiments have been conducted to show the effectiveness of our approach.

Author 1: Rasha Bin-Thalab
Author 2: Neamat El-Tazi
Author 3: Mohamed E.El-Sharkawi

Keywords: temporal XML; Keyword Search; ranking

Download PDF

Paper 31: OntoVerbal: a Generic Tool and Practical Application to SNOMED CT

Abstract: Ontology development is a non-trivial task requiring expertise in the chosen ontological language. We propose a method for making the content of ontologies more transparent by presenting, through the use of natural language generation, naturalistic descriptions of ontology classes as textual paragraphs. The method has been implemented in a proof-of-concept system, OntoVerbal, that automatically generates paragraph-sized textual descriptions of ontological classes expressed in OWL. OntoVerbal has been applied to ontologies that can be loaded into Protégé and been evaluated with SNOMED CT, showing that it provides coherent, well-structured and accurate textual descriptions of ontology classes.

Author 1: Shao Fen Liang
Author 2: Donia Scott
Author 3: Robert Stevens
Author 4: Alan Rector

Keywords: ontology verbalisation; natural language generation; OWL; SNOMED CT

Download PDF

Paper 32: Development of Copeland Score Methods for Determine Group Decisions

Abstract: Determining a group decision from the individual decisions of each decision maker requires a voting method. The Copeland score is one voting method that has been developed by previous researchers, but it does not accommodate the weight of the expertise and interests of each decision maker. This paper proposes a voting method using the Copeland score with added weighting. The method was developed to consider the weight of the expertise and interests of the decision makers, in accordance with the problems encountered in group decision making. The expertise and interests of the decision makers are weighted based on their contribution to the problems faced by the group in determining the decision.
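
A minimal sketch of a weighted Copeland procedure consistent with this description (the data structures and tie handling are our assumptions, not necessarily the paper's):

```python
from itertools import combinations

def weighted_copeland(rankings, weights):
    """rankings: one dict per decision maker mapping candidate -> rank (1 = best);
    weights: expertise/interest weight of each decision maker."""
    candidates = list(rankings[0].keys())
    score = {c: 0 for c in candidates}
    for a, b in combinations(candidates, 2):
        support_a = sum(w for r, w in zip(rankings, weights) if r[a] < r[b])
        support_b = sum(w for r, w in zip(rankings, weights) if r[b] < r[a])
        if support_a > support_b:
            score[a] += 1
            score[b] -= 1
        elif support_b > support_a:
            score[b] += 1
            score[a] -= 1
    return max(score, key=score.get), score

winner, scores = weighted_copeland(
    [{"A": 1, "B": 2, "C": 3}, {"A": 2, "B": 1, "C": 3}, {"A": 3, "B": 1, "C": 2}],
    weights=[0.5, 0.3, 0.2],
)
```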

Author 1: Ermatita
Author 2: Sri Hartati
Author 3: Retantyo Wardoyo
Author 4: Agus Harjoko

Keywords: Group Decision Support System; Copeland Score.

Download PDF

Paper 33: New electronic white cane for stair case detection and recognition using ultrasonic sensor

Abstract: Blind people need aids to interact with their environment with more security. A new device is therefore proposed to enable them to see the world with their ears. Considering not only system requirements but also technology cost, we used ultrasonic sensors and one monocular camera in the conception of our tool to make the user aware of the presence and nature of potentially encountered obstacles. In this paper, we focus on using only one ultrasonic sensor to detect staircases with the electronic cane; in this context, no previous work has considered such a challenge. Aware that the performance of an object recognition system depends on both object representation and classification algorithms, we use several frequency-domain representations of the ultrasonic signal: the spectrogram, which explains how the spectral density of the signal varies with time; the spectrum, which shows the amplitudes as a function of frequency; and the periodogram, which estimates the spectral density of the signal. Several features extracted from each representation contribute to the classification process. Our system was evaluated on a set of ultrasonic signals in which staircases of different shapes occur. Using a multiclass SVM approach, a recognition rate of 82.4% has been achieved.
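
A rough sketch of extracting a few spectrum, periodogram and spectrogram features from a 1-D ultrasonic echo before feeding them to a multiclass SVM; the particular statistics chosen here are illustrative, not the paper's feature set:

```python
import numpy as np
from scipy.signal import periodogram, spectrogram

def ultrasonic_features(x, fs):
    """A small frequency-domain feature vector for one ultrasonic signal."""
    f, pxx = periodogram(x, fs=fs)            # spectral density estimate
    spec = np.abs(np.fft.rfft(x))             # magnitude spectrum
    _, _, sxx = spectrogram(x, fs=fs)         # time-frequency energy
    return np.array([
        f[np.argmax(pxx)],                    # dominant frequency
        pxx.mean(), pxx.max(),
        spec.mean(), spec.std(),
        sxx.mean(), sxx.std(),
    ])

# the resulting vectors can be stacked and passed to sklearn.svm.SVC for multiclass training
```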

Author 1: Sonda Ammar Bouhamed
Author 2: Imene Khanfir Kallel
Author 3: Dorra Sellami Masmoudi

Keywords: Electronic white cane; ultrasonic signal processing; ground-stair classification; temporal representation of ultrasonic signal; frequencial representation of ultrasonic signal

Download PDF

Paper 34: Watermarking in E-commerce

Abstract: A major challenge for E-commerce and content-based businesses is the possibility of altering identity documents or other digital data. This paper shows a watermark-based approach to protect digital identity documents against a Print-Scan (PS) attack. We propose a secure ID card authentication system based on watermarking. For authentication purposes, a user/customer is asked to upload a scanned picture of a passport or ID card through the internet to fulfill a transaction online. To provide security in online ID card submission, we need to robustly encode personal information of ID card’s holder into the card itself, and then extract the hidden information correctly in a decoder after the PS operation. The PS operation imposes several distortions, such as geometric, rotation, and histogram distortion, on the watermark location, which may cause the loss of information in the watermark. An online secure authentication system needs to first eliminate the distortion of the PS operation before decoding the hidden data. This study proposes five preprocessing blocks to remove the distortions of the PS operation: filtering, localization, binarization, undoing rotation, and cropping. Experimental results with 100 ID cards showed that the proposed online ID card authentication system has an average accuracy of 99% in detecting hidden information inside ID cards after the PS process. The innovations of this study are the implementation of an online watermark-based authentication system which uses a scanned ID card picture without any added frames around the watermark location, unlike previous systems.

Author 1: Peyman Rahmati
Author 2: Andy Adler
Author 3: Thomas Tran

Keywords: Data hiding; geometric distortion; watermarking; print-and-scan; E-commerce

Download PDF

Paper 35: A Novel Software Tool for Analysing NT® File System Permissions

Abstract: Administrating and monitoring New Technology File System (NTFS) permissions can be a cumbersome and convoluted task. In today’s data rich world there has never been a more important time to ensure that data is secured against unwanted access. This paper identifies the essential and fundamental requirements of access control, highlighting the main causes of their misconfiguration within the NTFS. In response, a number of features are identified and an efficient, informative and intuitive software-based solution is proposed for examining file system permissions. In the first year that the software has been made freely available it has been downloaded and installed by over four thousand users.

Author 1: Simon Parkinson
Author 2: Andrew Crampton

Download PDF

Paper 36: Probabilistic Distributed Algorithm for Uniform Election in Triangular Grid Graphs

Abstract: Probabilistic algorithms are designed to handle problems that do not admit deterministic effective solutions. For the election problem, many algorithms are available and applicable under appropriate assumptions, for example: uniform election in trees, k-trees and polyominoids. In this paper, we first introduce a probabilistic algorithm for uniform election in triangular grid graphs, then we expose the set of rules that generate the class of triangular grid graphs. The main part of this paper is devoted to the analysis of our algorithm. We show that our algorithm is totally fair insofar as it gives every vertex of the given graph the same probability of being elected.

Author 1: El Mehdi Stouti
Author 2: Ismail Hind
Author 3: Abdelaaziz El Hibaoui

Keywords: Uniform Election, Distributed Algorithms, Probabilistic Election, Markov Process, Randomized Algorithm Analysis.

Download PDF

Paper 37: Correlated Topic Model for Web Services Ranking

Abstract: With the increasing number of published Web services providing similar functionalities, it is very tedious for a service consumer to make a decision and select the appropriate one according to her/his needs. In this paper, we explore several probabilistic topic models: Probabilistic Latent Semantic Analysis (PLSA), Latent Dirichlet Allocation (LDA) and the Correlated Topic Model (CTM) to extract latent factors from web service descriptions. In our approach, topic models are used as efficient dimension reduction techniques, which are able to capture semantic relationships between word-topic and topic-service, interpreted in terms of probability distributions. To address the limitation of keyword-based queries, we represent each web service description as a vector space and we introduce a new approach for discovering and ranking web services using latent factors. In our experiment, we evaluated our Service Discovery and Ranking approach by calculating the precision (P@n) and normalized discounted cumulative gain (NDCGn).
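
scikit-learn has no CTM implementation, but the discovery-and-ranking pipeline can be illustrated with LDA over toy service descriptions (the descriptions, query and topic count below are invented):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

services = [                                   # hypothetical service descriptions
    "weather forecast temperature by city",
    "currency exchange rate conversion service",
    "city map route and driving directions",
]
query = ["weather in a given city"]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(services)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(X)                   # service-topic distributions
q = lda.transform(vec.transform(query))        # query-topic distribution
scores = cosine_similarity(q, theta)[0]
print(scores.argsort()[::-1])                  # indices of services ranked by similarity
```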

Author 1: Mustapha AZNAG
Author 2: Mohamed QUAFAFOU
Author 3: Zahi JARIR

Keywords: Web service, Data Representation, Discovery, Ranking, Machine Learning, Topic Models

Download PDF

Paper 38: Wideband Parameters Analysis and Validation for Indoor radio Channel at 60/70/80GHz for Gigabit Wireless Communication employing Isotropic, Horn and Omni directional Antenna

Abstract: Recently, applications of millimeter (mm) waves for high-speed broadband wireless local area network communication systems in indoor environments have increasingly gained recognition, as they provide gigabit-speed wireless communications with carrier-class performance over distances of a mile or more, thanks to spectrum availability and wider bandwidth. Collectively referred to as E-Band, millimeter wave wireless technology has the potential to offer bandwidth delivery comparable to that of fiber optics, but without the financial and logistic challenges of deploying fiber. This paper investigates the wideband parameters using the ray tracing technique for indoor propagation systems, with rms delay spreads for omni-directional and horn antennas in a bent tunnel at 80 GHz of 2.03 and 1.95, respectively; in addition, the normalized received power at an excess delay of 0.55×10^8 at 70 GHz for the isotropic antenna was 0.97.
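
For context, the rms delay spread quoted above is the square root of the second central moment of the power delay profile; a direct computation looks like this (the delay/power arrays are placeholders):

```python
import numpy as np

def rms_delay_spread(delays, powers):
    """RMS delay spread of a power delay profile (delays and powers in linear units)."""
    p = np.asarray(powers, float)
    t = np.asarray(delays, float)
    mean_delay = np.sum(p * t) / np.sum(p)
    return np.sqrt(np.sum(p * t ** 2) / np.sum(p) - mean_delay ** 2)

print(rms_delay_spread([0.0, 10e-9, 25e-9], [1.0, 0.5, 0.1]))   # example profile
```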

Author 1: E. Affum
Author 2: E.T. Tchao
Author 3: K. Diawuo
Author 4: K. Agyekum

Keywords: Indoor; Wideband; Isotropic; rms Delay; Power delay Profile; Excess delay

Download PDF

Paper 39: Smart Grid Network Transmission Line RLC Modelling Using Random Power Line Synthesis Scheme

Abstract: This work proposes Random Power Line Synthesis (RPLS) as a quicker computational approach to solving the RLC parameters of a modern smart grid transmission network. Since modern grid systems provide a holistic perspective of modern grid development, it is obvious that an ageing transmission network cannot serve the expanded load demand. The need to revolutionize the traditional transmission model while exploiting basic electrical theories and principles in a Smart Grid (SG) architecture necessitated this paper. This work seeks to address RLC parameter modelling for an SG template to provision dynamic power in the Nigerian context. Other schemes of transmission RLC modelling were studied and their limitations outlined. Consequently, we propose a fuzzy smart grid framework for RLC computation and develop a proposed SG overhead transmission line from its conductor characteristics and tower geometry, considering the RLC parameters of the conductor while applying RPLS to generate the parameter metrics.
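
As background to the RLC modelling, the standard per-phase parameters of a transposed overhead line follow directly from conductor geometry; a generic sketch (not the RPLS scheme itself, and with an assumed aluminium resistivity) is:

```python
import numpy as np

EPS0 = 8.854e-12     # permittivity of free space, F/m
RHO_AL = 2.82e-8     # assumed aluminium resistivity, ohm-m

def line_rlc_per_km(gmd_m, radius_m, area_m2, rho=RHO_AL):
    """DC resistance, inductance and capacitance per km for a transposed three-phase line."""
    gmr = 0.7788 * radius_m                                   # GMR of a solid round conductor
    R = rho / area_m2 * 1e3                                   # ohm/km
    L = 2e-7 * np.log(gmd_m / gmr) * 1e3                      # H/km per phase
    C = 2 * np.pi * EPS0 / np.log(gmd_m / radius_m) * 1e3     # F/km per phase (to neutral)
    return R, L, C
```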

Author 1: Ezennaya S.O
Author 2: Udeze C. C
Author 3: Okafor K .C
Author 4: Onyedikachi S.N
Author 5: Anierobi C.C

Keywords: RPLS; Smart Grid; Overhead; Conductor; RLC parameters.

Download PDF
