IJACSA Volume 2 Issue 5

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.


Paper 1: Using Merkle Tree to Mitigate Cooperative Black-hole Attack in Wireless Mesh Networks

Abstract: Security is a major concern and a frequent topic of discussion among users of Wireless Mesh Networks (WMNs). The open architecture of WMNs makes it easy for malicious attackers to exploit loopholes in the routing protocol. The cooperative black-hole attack is a type of denial-of-service attack that sabotages the routing functions of the network layer in WMNs. In this paper we focus on improving the security of one of the popular routing protocols for WMNs, the Ad hoc On-demand Distance Vector (AODV) routing protocol, and present a possible solution to this attack using a Merkle hash tree.
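
As background, a minimal sketch of the Merkle-tree primitive the abstract builds on: a root hash commits to a set of leaf values, so any tampering with a committed value is detectable by recomputing the root. This is a generic illustration (SHA-256 standing in for the one-way hash function), not the paper's AODV integration.

    import hashlib

    def owhf(data: bytes) -> bytes:
        """One-way hash function (OWHF) applied at every tree node."""
        return hashlib.sha256(data).digest()

    def merkle_root(leaves: list) -> bytes:
        """Hash the leaves, then fold pairwise until one root commits to all."""
        level = [owhf(leaf) for leaf in leaves]
        while len(level) > 1:
            if len(level) % 2:               # duplicate last node on odd levels
                level.append(level[-1])
            level = [owhf(level[i] + level[i + 1])
                     for i in range(0, len(level), 2)]
        return level[0]

    # Any change to a committed value changes the root, exposing tampering.
    honest = merkle_root([b"nodeA", b"nodeB", b"nodeC", b"nodeD"])
    forged = merkle_root([b"nodeA", b"nodeB", b"black-hole", b"nodeD"])
    assert honest != forged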

Author 1: Shree Om
Author 2: Mohammad Talib

Keywords: WMN; MANET; Cooperative black-hole attack; AODV; Merkle tree; malicious; OWHF

PDF

Paper 2: QVT transformation by modelling - From UML Model to MD Model

Abstract: To provide a complete analysis of an organization, its business, and its needs, leaders must have data that support decision making. Data warehouses are designed to meet such needs; they are an analysis and data management technology. This article describes an MDA (Model Driven Architecture) process that we have used to automatically generate the multidimensional schema of a data warehouse. This process applies model transformation using several standards such as the Unified Modeling Language, the Meta-Object Facility, Query View Transformation, the Object Constraint Language, ... From the UML model, especially the class diagram, a multidimensional model is generated as an XML file; the transformation is carried out by the QVT (Query View Transformation) language and the OCL (Object Constraint Language) language. To validate our approach, a case study is presented at the end of this work.

Author 1: I. Arrassen
Author 2: A. Meziane
Author 3: R. Sbai
Author 4: M. Erramdani

Keywords: Data warehouse; Model Driven Architecture; Multidimensional Modeling; Meta Model; Transformation rules; Query View Transformation

PDF

Paper 3: Fuzzy Particle Swarm Optimization with Simulated Annealing and Neighborhood Information Communication for Solving TSP

Abstract: In this paper, an effective hybrid algorithm based on Particle Swarm Optimization (PSO) is proposed for solving the Traveling Salesman Problem (TSP), a well-known NP-complete problem. The hybrid algorithm combines the high global search efficiency of fuzzy PSO with a powerful ability to avoid becoming trapped in local minima. In the fuzzy PSO system, fuzzy matrices represent the position and velocity of the particles, and the operators in the original PSO position and velocity formulas are redefined. Two strategies are employed in the hybrid algorithm to strengthen the diversity of the particles and to speed up the convergence process. The first strategy is based on Neighborhood Information Communication (NIC) among the particles, whereby a particle absorbs the better historical experience of its neighboring particles. This strategy does not depend only on the individual experience of each particle, but also on the information about the current state shared by its neighbors. The second strategy is the use of Simulated Annealing (SA), which randomizes the search in a way that allows occasional alterations that worsen the solution, in an attempt to increase the probability of escaping local optima. SA is used to slow down the degeneration of the PSO swarm and to increase the swarm's diversity. In SA, a new solution in the neighborhood of the original one is generated using a designed search method. A new solution with fitness worse than the original is accepted with a probability that gradually decreases in the late stages of the search process. The hybrid algorithm is examined on a set of benchmark problems from TSPLIB with various sizes and levels of hardness. Comparative experiments were conducted between the proposed algorithm and regular fuzzy PSO, SA, and basic ACO. The computational results demonstrate the effectiveness of the proposed algorithm for the TSP in terms of solution quality and convergence speed.
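
For concreteness, a minimal sketch of the Metropolis-style acceptance rule that SA contributes to such a hybrid: improving tours are always accepted, while worsening tours are accepted with a probability that shrinks as the temperature cools. The values below are illustrative; the paper's neighborhood search method and cooling schedule are its own.

    import math, random

    def sa_accept(curr_cost: float, new_cost: float, temperature: float) -> bool:
        """Accept improvements; accept worse tours with probability e^(-delta/T)."""
        delta = new_cost - curr_cost
        if delta <= 0:
            return True
        return random.random() < math.exp(-delta / temperature)

    # As T cools, the chance of accepting a tour 10 units longer collapses,
    # which lets the swarm roam early but converge late in the search.
    for temp in (100.0, 10.0, 1.0):
        print(f"T={temp:6.1f}  P(accept +10) = {math.exp(-10.0 / temp):.4f}")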

Author 1: Rehab F Abdel-Kader

Keywords: Information Communication; Particle Swarm Optimization; Simulated Annealing; TSP.

PDF

Paper 4: Empirical Validation of Web Metrics for Improving the Quality of Web Page

Abstract: Web page metrics are among the key elements in measuring various attributes of a website. Metrics assign concrete values to the attributes of websites, which may be used to compare different web pages. Web pages can be compared based on page size, information quality, screen coverage, content coverage, etc. The Internet and websites are emerging media and service avenues whose quality must improve to provide better customer service to a wider user base. E-business is emerging, and websites are not just a medium for communication; they are also products for providing services. Measurement is a key issue for the survival of any organization; therefore, measuring and evaluating websites for quality, and understanding the key issues of website engineering, is very important. In this paper we collect data from the Webby Awards (2007-2010) and classify the websites into good sites and bad sites on the basis of the assessed metrics. To achieve this aim we investigate 15 metrics proposed by various researchers. We present the findings of a quantitative analysis of web page attributes and how these attributes are calculated. The results of this paper can be used in quantitative studies of website design. The metrics captured in the prediction model can be used to predict the goodness of a website design.

Author 1: Yogesh Singh
Author 2: Ruchika Malhotra
Author 3: Poonam Gupta

Keywords: Metrics; Web page; Website; Web page quality; Internet; Page composition Metrics; Page formatting Metrics.

PDF

Paper 5: Systematic and Integrative Analysis of Proteomic Data using Bioinformatics Tools

Abstract: The analysis and interpretation of relationships between biological molecules are done with the help of networks. Networks are used ubiquitously throughout biology to represent the relationships between genes and gene products. Network models have facilitated a shift from the study of evolutionary conservation between individual genes and gene products towards the study of conservation at the level of pathways and complexes. Recent work has revealed much about chemical reactions inside hundreds of organisms as well as universal characteristics of metabolic networks, which shed light on the evolution of the networks. However, the characteristics of individual metabolites have been neglected in these networks. The current paper provides an overview of the bioinformatics software used to visualize biological networks from proteomic data, its main functions, and its limitations.

Author 1: Rashmi Rameshwari
Author 2: Dr. T. V. Prasad

Keywords: Metabolic network; protein interaction network; visualization tools.

PDF

Paper 6: A Conceptual Framework for an Ontology-Based Examination System

Abstract: There is an increasing reliance on the web for many software application deployments. Millions of services ranging across commerce, education, tourism, and entertainment are now available on the web, making the web the largest database in the world today. However, the information available on the web is structured only syntactically, whereas the trend is to provide semantic linkage to it. The semantic web serves as a medium to enhance the current web, one in which computers can process information, interpret it, and connect it to enhance knowledge retrieval. The semantic web has encouraged the creation of ontologies in a great variety of domains. In this paper, the conceptual framework for an ontology-based examination system and the ontology required for such examination systems are described. The domain ontology was constructed based on the Methontology method proposed by Fernández (1997). The ontology can be used to design and create the metadata elements required to develop web-based examination applications, and it can interoperate with other applications. Taxonomic evaluation and the Guarino-Welty OntoClean technique were used to assess and refine the domain ontology in order to ensure it is error-free.

Author 1: Adekoya Adebayo Felix
Author 2: Akinwale Adio Taofiki
Author 3: Sofoluwe Adetokunbo

Keywords: semantic web; examination systems; ontology; knowledge bases.

PDF

Paper 7: Design and Analysis of a Novel Low-Power SRAM Bit-Cell Structure at Deep-Sub-Micron CMOS Technology for Mobile Multimedia Applications

Abstract: The growing demand for high-density VLSI circuits and the exponential dependence of leakage current on oxide thickness are becoming major challenges in deep-sub-micron CMOS technology. In this work, a novel Static Random Access Memory (SRAM) cell is proposed, targeting a reduction in the overall power requirements, i.e., dynamic and standby power, in the existing dual-bit-line architecture. The active power is reduced by lowering the supply voltage when the memory is functional, and the standby power is reduced by reducing the gate and sub-threshold leakage currents when the memory is idle. The paper explores an integrated approach at the architecture and circuit levels to reduce the leakage power dissipation while maintaining high performance in deep-sub-micron cache memories. The proposed memory bit-cell uses pMOS pass transistors to lower the gate leakage currents, while a full-supply body-biasing scheme is used to reduce the sub-threshold leakage currents. To further reduce the leakage current, the stacking effect is exploited by switching off the stack transistors when the memory is idle. In comparison to the conventional 6T SRAM bit-cell, the total leakage power is reduced by 50% while the cell is storing data ‘1’ and by 46% for data ‘0’, at a very small area penalty. A total active power reduction of 89% is achieved when the cell is storing data ‘0’ or ‘1’. The design simulation work was performed in 45 nm deep-sub-micron CMOS technology at 25°C with a VDD of 0.7 V.

Author 1: Neeraj Kr. Shukla
Author 2: R. K. Singh
Author 3: Manisha Pattanaik

Keywords: SRAM Bit-Cell; Gate Leakage; Sub-threshold Leakage; NC-SRAM; Asymmetric SRAM; PP-SRAM; Stacking Effect.

PDF

Paper 8: Generating Performance Analysis of GPU compared to Single-core and Multi-core CPU for Natural Language Applications

Abstract: In Natural Language Processing (NLP) applications, the main time-consuming process is string matching, due to the large size of the lexicon. In string matching, data dependence is minimal, and hence it is ideal for parallelization. A dedicated system with memory interleaving and parallel processing techniques for string matching can reduce this burden on the host CPU, thereby making the system more suitable for real-time applications. It is now possible to apply parallelism using multiple cores on a CPU, though they need to be used explicitly to achieve high performance. Recent GPUs hold a large number of cores and have the potential for high performance in many general-purpose applications. Programming tools for multi-core CPUs and many-core GPUs have been formulated, but it is still difficult to achieve high performance on these platforms. In this paper, we compare the performance of a single-core CPU, a multi-core CPU, and a GPU using such a Natural Language Processing application.
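
To make the data-parallelism claim concrete, a minimal CPU-side sketch: each worker scans a disjoint slice of the lexicon with no cross-slice dependence, which is exactly the property that maps the workload onto multiple CPU cores or GPU threads. This is an illustrative Python/multiprocessing analogue, not the paper's CUDA or OpenMP implementation.

    from multiprocessing import Pool

    LEXICON = ["parse", "parser", "parsing", "lexicon", "token"] * 100_000

    def match_slice(args):
        """Scan one slice of the lexicon; no cross-slice data dependence."""
        words, query = args
        return [w for w in words if w == query]

    if __name__ == "__main__":
        query, n_workers = "lexicon", 4
        step = len(LEXICON) // n_workers
        slices = [(LEXICON[i:i + step], query)
                  for i in range(0, len(LEXICON), step)]
        with Pool(n_workers) as pool:
            hits = [w for part in pool.map(match_slice, slices) for w in part]
        print(len(hits), "matches")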

Author 1: Shubham Gupta
Author 2: M. Rajasekhara Babu

Keywords: NLP; Lexical Analysis; Lexicon; Shallow Parsing; GPU; GPGPU; CUDA; OpenMP.

PDF

Paper 9: Main Stream Temperature Control System Based on Smith-PID Scheduling Network Control

Abstract: This paper is concerned with the controller design problem for a class of networked main stream temperature control systems with long random time delays and packet losses. To compensate for the effects of time delay and packet losses, a gain-scheduling based Smith-PID controller is proposed for the considered networked control systems (NCSs). Moreover, to further improve the control performance of the NCSs, a genetic algorithm is employed to obtain the optimal control parameters for the gain-scheduling Smith-PID controller. Simulation results are given to demonstrate the effectiveness of the proposed methods.
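
A minimal sketch of the Smith-PID idea the abstract names: a PID acts on an error signal corrected by a delay-free internal model of the plant, so the dead time is effectively taken out of the feedback loop. The first-order model and all constants here are illustrative, not the paper's identified temperature dynamics; in the NCS setting the PID gains would additionally be scheduled against the measured network delay, with the genetic algorithm searching for the gain sets offline.

    from collections import deque

    class SmithPID:
        def __init__(self, kp, ki, kd, a, b, delay_steps, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.a, self.b = a, b                    # model: y' = -a*y + b*u
            self.y_model = 0.0                       # delay-free model output
            self.delay = deque([0.0] * delay_steps)  # models the dead time
            self.integral = self.prev_err = 0.0

        def step(self, setpoint, y_measured):
            # Smith correction: feed back the delay-free model output plus
            # the mismatch between the plant and the delayed model.
            err = setpoint - (self.y_model + y_measured - self.delay[0])
            self.integral += err * self.dt
            deriv = (err - self.prev_err) / self.dt
            self.prev_err = err
            u = self.kp * err + self.ki * self.integral + self.kd * deriv
            # advance the internal model and push it into the delay line
            self.y_model += (-self.a * self.y_model + self.b * u) * self.dt
            self.delay.append(self.y_model)
            self.delay.popleft()
            return u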

Author 1: Jianqiu Deng
Author 2: Haijun Li
Author 3: Zhengxia Zhang

Keywords: Network control systems (NCSs); Gain-scheduling based Smith-PID; main stream temperature system; time delay; packet loss.

PDF

Paper 10: FPGA-Based Design of High-Speed CIC Decimator for Wireless Applications

Abstract: In this paper an efficient multiplier-less technique is presented to design and implement a high-speed CIC decimator for wireless applications such as SDR and GSM. The Cascaded Integrator-Comb (CIC) is a commonly used decimation filter which performs sample rate conversion (SRC) using only additions/subtractions. The implementation is based on efficient utilization of the embedded LUTs of the target device to enhance the speed of the proposed design. The use of embedded LUTs not only increases speed but also saves resources on the target device, making the method an efficient way to design and implement CIC decimators. The fully pipelined CIC decimator is designed with Matlab, simulated with Xilinx AccelDSP, synthesized with the Xilinx Synthesis Tool (XST), and implemented on a Virtex-II Pro based XC2VP50-6 target FPGA device. The proposed design can operate at an estimated frequency of 276.6 MHz while consuming considerably fewer resources on the target device, providing a cost-effective solution for SDR-based wireless applications.
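
A minimal sketch of the CIC arithmetic the abstract relies on: N integrator stages at the input rate, decimation by R, then N comb stages at the output rate, using additions and subtractions only. The stage count and rate below are illustrative, not the paper's FPGA configuration.

    def cic_decimate(x, R=8, N=3, M=1):
        """N-stage CIC decimator: integrators -> downsample by R -> combs."""
        for _ in range(N):                    # integrators at the input rate
            acc, out = 0, []
            for sample in x:
                acc += sample
                out.append(acc)
            x = out
        x = x[::R]                            # sample rate conversion
        for _ in range(N):                    # combs at the output rate
            x = [s - (x[i - M] if i >= M else 0) for i, s in enumerate(x)]
        return x

    # A unit step settles to the CIC gain (R*M)**N = 512 for R=8, N=3:
    print(cic_decimate([1] * 64)[:4])         # -> [1, 162, 477, 512]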

Author 1: Rajesh Mehra
Author 2: Rashmi Arora

Keywords: CIC; FPGA; GSM; LUT; SDR

PDF

Paper 11: Implementation and Performance Analysis of Video Edge Detection System on Multiprocessor Platform

Abstract: This paper presents an agile development and implementation of edge detection on an SMT8039-based video and imaging module. As video processing techniques develop, their algorithms become more complicated, and high-resolution, real-time applications cannot be implemented on a single CPU or DSP. The system offers a significant performance increase over current programmable DSP-based implementations. This paper shows that the considerable performance improvement of the FPGA solution results from the availability of high I/O resources and a pipelined architecture. FPGA technology provides an alternative way to obtain high performance. Prototyping a design with an FPGA offers advantages such as relatively low cost, reduced time to market, and flexibility. FPGAs also supply enough logic to implement complete systems and subsystems, and they provide reconfigurable logic for application-specific programming. Although DSPs continue to provide more and more power, and nearly any function can be designed into a large enough FPGA, that is not usually the easiest or cheapest approach. This paper designs and implements an edge detection method based on coordinated DSP-FPGA techniques. The processing task is divided between the DSP and the FPGA: the DSP is dedicated to data I/O functions, while the FPGA takes the input video from the DSP, implements the logic, and after processing passes it back to the DSP. The PSNR values of all the edge detection techniques are compared. When the system is validated, it is observed that the Laplacian of Gaussian method appears to be the most sensitive even at low levels of noise, while the Roberts, Canny, and Prewitt methods appear to be barely perturbed. However, Sobel performs best with a median filter in the presence of Gaussian, salt-and-pepper, and speckle noise in the video signal.
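
A minimal software sketch of the Sobel operator, the best performer reported above: two 3x3 gradient kernels, a magnitude, and a threshold. This is a pure-NumPy illustration of the arithmetic only; the paper's contribution is the pipelined DSP-FPGA partitioning of this kind of kernel.

    import numpy as np

    KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    KY = KX.T                                 # vertical-gradient kernel

    def sobel_edges(gray: np.ndarray, thresh: float = 100.0) -> np.ndarray:
        """Return a binary edge map from a 2-D grayscale array."""
        h, w = gray.shape
        mag = np.zeros((h, w))
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                win = gray[y - 1:y + 2, x - 1:x + 2]
                mag[y, x] = np.hypot(np.sum(win * KX), np.sum(win * KY))
        return (mag > thresh).astype(np.uint8) * 255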

Author 1: Mandeep Kaur
Author 2: Kulbir Singh

Keywords: Multiprocessor platform; Edge detection; Performance evaluation; noise.

PDF

Paper 12: A robust multi color lane marking detection approach for Indian scenario

Abstract: Lane detection is an essential component of Advanced Driver Assistance Systems. Congestion on the roads is increasing day by day due to the growing number of four-wheelers on the road. This congestion, coupled with ignorance of road rules, contributes to road accidents, and lane marking violations are one of the major causes of accidents on highways in India. In this work we have designed and implemented an automatic lane marking violation detection algorithm that runs in real time. The HSV color-segmentation based approach is verified for both white lanes and yellow lanes in the Indian context. Various comparative experimental results show that the proposed approach is very effective in lane detection and can be implemented in real time.
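
A minimal sketch of the HSV color-segmentation step: convert to HSV, threshold once for white markings (low saturation, high value) and once for yellow markings (a narrow hue band), then OR the two masks. The bounds below are illustrative guesses, not the paper's calibrated values for Indian road scenes.

    import cv2
    import numpy as np

    def lane_mask(bgr: np.ndarray) -> np.ndarray:
        """Binary mask of candidate white and yellow lane-marking pixels."""
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
        white = cv2.inRange(hsv, (0, 0, 200), (179, 40, 255))
        yellow = cv2.inRange(hsv, (20, 80, 80), (35, 255, 255))  # hue ~20-35
        return cv2.bitwise_or(white, yellow)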

Author 1: L N P Boggavarapu
Author 2: R S Vaddi
Author 3: H D Vankayalapati
Author 4: J K Munagala

Keywords: Color segmentation; HSV; Edge orientation; connected components.

PDF

Paper 13: A Comprehensive Analysis of Materialized Views in a Data Warehouse Environment

Abstract: Data in a warehouse can be perceived as a collection of materialized views that are generated as per the user requirements specified in the queries posed against the information contained in the warehouse. User requirements and constraints frequently change over time, and these changes may dynamically evolve the data and view definitions stored in the data warehouse: current requirements are modified, and new requirements are added to deal with the latest business scenarios. The data preserved in the warehouse, along with these materialized views, must therefore be updated and maintained so that they reflect changes in the data sources as well as in the requirements stated by the users. Selection and maintenance of these views is one of the vital tasks in a data warehousing environment, providing optimal efficiency by reducing query response time, query processing cost, and maintenance cost. Another major issue related to materialized views is whether they should be recomputed for every change in the definitions or base relations, or adapted incrementally from the existing views. In this paper, we examine several ways of performing changes to materialized views as well as their selection and maintenance in data warehousing environments. We also provide a comprehensive study of the research works of different authors on various parameters and present the same in a tabular manner.

Author 1: Garima Thakur
Author 2: Anjana Gosain

Keywords: Materialized views; view maintenance; view selection; view adaptation; view synchronization.

PDF

Paper 14: A Routing Scheme for a New Irregular Baseline Multistage Interconnection Network

Abstract: Parallel processing is an efficient form of information processing which emphasizes the exploitation of concurrent events in the computing process. Achieving parallel processing requires the development of more capable and cost-effective systems, and, to operate efficiently, a network that can handle large amounts of traffic. The Multistage Interconnection Network plays a vital role in the performance of these multiprocessor systems. In this paper an attempt has been made to analyze the characteristics of a new class of irregular fault-tolerant multistage interconnection network, named the Irregular Modified Augmented Baseline Network (IMABN), and an efficient routing procedure has been defined to study the fault tolerance of the network. Fault tolerance, the ability of the network to operate in the presence of multiple faults, is very important for continuous operation over a relatively long period of time. The behavior of the proposed IMABN has been analyzed and compared with the regular MABN under fault-free conditions and in the presence of faults. In an IMABN there are six possible paths between each source and destination, whereas an MABN has only four. Thus the proposed IMABN is more fault-tolerant than the existing regular Modified Augmented Baseline Network (MABN).

Author 1: Mamta Ghai

Keywords: Multistage Interconnection network; Fault-Tolerance; Augmented Baseline Network.

PDF

Paper 15: Application of Fuzzy Logic Approach to Software Effort Estimation

Abstract: The most significant activity in software project management is software development effort prediction. The literature shows several algorithmic cost estimation models such as Boehm's COCOMO, Albrecht's Function Point Analysis, Putnam's SLIM, ESTIMACS, etc., but each model has its own pros and cons in estimating development cost and effort. This is because project data available in the initial stages of a project are often incomplete, inconsistent, uncertain, and unclear. The need for accurate effort prediction in software project management is an ongoing challenge. A fuzzy model is more apt when a system is not suitable for analysis by the conventional approach or when the available data are uncertain, inaccurate, or vague. Fuzzy logic is a convenient way to map an input space to an output space, and it is based on fuzzy set theory. A fuzzy set is a set without a crisp, clearly defined boundary; it is characterized by a membership function, which associates with each point in the fuzzy set a real number in the interval [0, 1], called the degree or grade of membership. The membership functions may be triangular, generalized bell (GBell), Gaussian, trapezoidal, etc. In the present paper, software development effort prediction using the fuzzy triangular membership function and the GBell membership function is implemented and compared with COCOMO. A case study based on the NASA93 dataset compares the proposed fuzzy model with the Intermediate COCOMO. The results were analyzed using different criteria such as VAF, MARE, VARE, MMRE, Prediction, and Mean BRE. It is observed that the fuzzy logic model using the triangular membership function provides better results than the other models.
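
A minimal sketch of the triangular membership function at the heart of the proposed model: membership rises linearly from a to the peak b and falls back to zero at c. The (a, b, c) values below are illustrative, not the paper's calibrated cost-driver ranges.

    def tri_mf(x: float, a: float, b: float, c: float) -> float:
        """Triangular membership: 0 outside [a, c], 1 at the peak b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    # e.g. fuzzifying a 52 KLOC project against a hypothetical
    # 'medium size' fuzzy set with (a, b, c) = (30, 50, 70):
    print(tri_mf(52, 30, 50, 70))   # -> 0.9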

Author 1: Prasad Reddy P.V.G.D.
Author 2: Sudha K. R
Author 3: Rama Sree P

Keywords: Development Effort; EAF; Cost Drivers; Fuzzy Identification; Membership Functions; Fuzzy Rules; NASA93 dataset.

PDF

Paper 16: A Load Balancing Policy for Heterogeneous Computational Grids

Abstract: Computational grids have the potential computing power for solving large-scale scientific computing applications. To improve the global throughput of these applications, the workload has to be evenly distributed among the available computational resources in the grid environment. This paper addresses the problem of scheduling and load balancing in heterogeneous computational grids. We propose a two-level load balancing policy for the multi-cluster grid environment, where computational resources are dispersed across different administrative domains or clusters located in different local area networks. The proposed load balancing policy takes into account the heterogeneity of the computational resources. It distributes the system workload based on the capacity of the processing elements, which minimizes the overall mean job response time and maximizes system utilization and throughput at the steady state. An analytical model is developed to evaluate the performance of the proposed load balancing policy. The results obtained analytically are validated by simulating the model using the Arena simulation package, and the overall mean job response time obtained by simulation is very close to that obtained analytically. The simulation results also show that the proposed load balancing policy outperforms the random and uniform distribution load balancing policies in terms of mean job response time. The improvement ratio increases as the system workload increases, and the maximum improvement ratio obtained is about 72% in the range of system parameter values examined.
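
A minimal sketch of the heterogeneity-aware idea: dispatch an incoming batch of jobs to clusters in proportion to their processing capacity, so faster resources receive proportionally more work. The capacities are illustrative; the paper's two-level policy additionally models queueing behavior analytically.

    def distribute(jobs: int, capacities: list) -> list:
        """Split a job batch across clusters proportionally to capacity."""
        total = sum(capacities)
        share = [int(jobs * c / total) for c in capacities]
        share[share.index(min(share))] += jobs - sum(share)  # rounding leftover
        return share

    # Four clusters, one 4x and one 2x as fast as the two baseline clusters:
    print(distribute(1000, [4.0, 2.0, 1.0, 1.0]))   # -> [500, 250, 125, 125]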

Author 1: Said Fathy El-Zoghdy

Keywords: grid computing; resource management; load balancing; performance evaluation; queuing theory; simulation models.

PDF

Paper 17: Generating PNS for Secret Key Cryptography Using Cellular Automaton

Abstract: The paper presents new results concerning the application of cellular automata (CAs) to secret-key cryptography using the Vernam cipher. CAs are applied to generate the pseudo-random number sequence (PNS) used during the encryption process. One-dimensional, non-uniform CAs are considered as generators of the pseudo-random number sequences (PNSs) used in secret-key cryptography. The quality of the PNSs highly depends on the set of applied CA rules. Rules of radius r = 1 and 2 for non-uniform one-dimensional CAs have been considered. The search for rules is performed with an evolutionary technique called cellular programming. As a result of the collective behavior of the discovered set of CA rules, very high quality PNSs are generated, outperforming known one-dimensional CA-based PNS generators used in secret-key cryptography. The extended set of CA rules that was found makes the cryptographic system much more resistant to attempts to break the cryptographic key.
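
A minimal sketch of a radius r = 1 CA used as a bit-stream generator: each cell's next state is looked up from a rule table over its three-cell neighborhood, and one tap cell emits the pseudo-random bit sequence. A single uniform rule (rule 30) stands in here for the paper's evolved non-uniform rule sets.

    def ca_prng(seed_bits, rule=30, steps=16, tap=0):
        """Emit one bit per CA step from a circular radius-1 automaton."""
        cells, out, n = list(seed_bits), [], len(seed_bits)
        for _ in range(steps):
            out.append(cells[tap])
            cells = [(rule >> (cells[(i - 1) % n] * 4 +
                               cells[i] * 2 +
                               cells[(i + 1) % n])) & 1
                     for i in range(n)]
        return out

    print(ca_prng([0, 0, 0, 0, 1, 0, 0, 0]))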

Author 1: Bijayalaxmi Kar
Author 2: D. Chandrasekhra Rao
Author 3: Dr. Amiya Kumar Rath

Keywords: Cellular automata; Cellular programming; Random number generators; Symmetric key; cryptography; Vernam cipher

PDF

Paper 18: Computerised Speech Processing in Hearing Aids using FPGA Architecture

Abstract: The development of computerized speech processing system is to mimic the natural functionality of human hearing, because of advent of technology that used Very Large Scale Integration (VLSI) devices such as Field Programmable Gate Array (FPGA) to meet the challenging requirement of providing 100% functionality of the damaged human hearing parts. Here a computerized laboratory speech processor based on Xilinx Spartan3 FPGA system was developed for hearing aids research and also presented comparison details of the commercially available Hearing Aids. The hardware design and software implementation of the speech processor are described in detail. The FPGA based speech processor is capable of providing high-rate stimulation with 12 electrodes against conventional 8 electrodes in earlier research. Using short biphasic pulses presented either simultaneously or in an interleaved fashion. Different speech processing algorithms including the Continuous Interleaved Sampling (CIS) strategy were implemented in this processor and tested successfully.

Author 1: V. Hanuman Kumar
Author 2: P. Seetha Ramaiah

Keywords: Speech processing system; VLSI; FPGA; CIS.

PDF

Paper 19: A Neural Approach for Reliable and Fault Tolerant Wireless Sensor Networks

Abstract: This paper presents a neural model for reliable and fault-tolerant transmission in Wireless Sensor Networks, based on Bidirectional Associative Memory. The proposed model is an attempt to enhance the performance of both the cooperative and non-cooperative Automatic Repeat Request (ARQ) schemes in terms of reliability and fault tolerance. We also demonstrate the performance of both schemes with the help of suitable examples.
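
A minimal sketch of Bidirectional Associative Memory recall: pattern pairs are stored as a sum of outer products, and a possibly corrupted input is bounced between the two layers until it settles on a stored pair. The bipolar toy patterns are illustrative; the paper's encoding of ARQ transmissions is its own.

    import numpy as np

    X = np.array([[1, -1, 1, -1], [1, 1, -1, -1]])   # input patterns
    Y = np.array([[1, -1], [-1, 1]])                 # associated outputs
    W = sum(np.outer(x, y) for x, y in zip(X, Y))    # correlation weights

    def recall(x, iters=5):
        """Bipolar BAM recall; ties in the net input break toward +1."""
        for _ in range(iters):
            y = np.where(x @ W >= 0, 1, -1)
            x = np.where(W @ y >= 0, 1, -1)
        return y

    print(recall(np.array([1, -1, 1, 1])))   # noisy X[0] still recalls Y[0]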

Author 1: Vijay Kumar
Author 2: R. B. Patel
Author 3: Manpreet Singh
Author 4: Rohit Vaid

Keywords: Reliability; Fault tolerance; Bi-directional Associative Memory; Wireless Sensor Network

PDF

Paper 20: Deploying an Application on the Cloud

Abstract: Cloud computing, the impending need for computing as an optimal utility, has the potential to take a gigantic leap in the IT industry once it is structured and put to optimal use with regard to contemporary trends. Developers with innovative ideas need not be apprehensive about wasting costly resources on a service that does not meet needs and expectations; cloud computing is like a panacea for overcoming these hurdles. It promises to increase the velocity with which applications are deployed, to increase creativity and innovation, and to lower costs, all while increasing business acumen. It calls for less investment and yields a harvest of benefits. End-users pay only for the amount of resources they use and can easily scale up as their needs grow. Service providers, on the other hand, can utilize virtualization technology to increase hardware utilization and simplify management. People want to move large-scale grid computations that they used to run on traditional clusters into a centrally managed environment, pay for use, and be done with it. This paper deals at length with the cloud, cloud computing, and its myriad applications.

Author 1: N. Ram Ganga Charan
Author 2: S. Tirupati Rao
Author 3: Dr. P. V. S. Srinivas

Keywords: Cloud; Virtualization; EC2; IaaS; PaaS; SaaS; CaaS; DaaS; public cloud; private cloud; hybrid cloud; community cloud

PDF

Paper 21: Radial Basis Function For Handwritten Devanagari Numeral Recognition

Abstract: The task of recognizing handwritten numerals using a classifier is of great importance. This paper applies the radial basis function (RBF) technique to handwritten numeral recognition for the Devanagari script. Much work has been done on Devanagari numeral recognition using different techniques to increase recognition accuracy. Since no global database exists, we first created a database by applying pre-processing to the set of training data. Then, using Principal Component Analysis, we extracted the features of each image; some researchers have also used density-based feature extraction. Since different people have different writing styles, we aim to build a system in which numeral recognition becomes easy. The centers of the hidden layer are then determined, and the weights between the hidden layer and the output layer are determined to calculate the output of each neuron, where the output is the summed value over the neurons. In this paper we propose an algorithm for Devanagari numeral recognition using the system described above.
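
A minimal sketch of the RBF forward pass described above: Gaussian hidden units centered on prototypes (e.g. found by k-means), followed by a weighted sum at the output layer. The centers, width, and weights below are random placeholders, not values trained on Devanagari numerals.

    import numpy as np

    rng = np.random.default_rng(0)
    centers = rng.normal(size=(20, 64))    # 20 hidden units, 64-dim features
    weights = rng.normal(size=(20, 10))    # 10 numeral classes
    sigma = 1.5

    def rbf_classify(x: np.ndarray) -> int:
        """Gaussian activations at the hidden layer, summed at the output."""
        phi = np.exp(-np.sum((centers - x) ** 2, axis=1) / (2 * sigma ** 2))
        return int(np.argmax(phi @ weights))

    print(rbf_classify(rng.normal(size=64)))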

Author 1: Prerna Singh
Author 2: Nidhi Tyagi

Keywords: Radial Basis Function; Devanagari Numeral Recognition; K-means clustering; Principal Component Analysis (PCA)

PDF

Paper 23: A Performance Study of Some Sophisticated Partitioning Algorithms

Abstract: Partitioning is the central component of Quicksort, an intriguing sorting algorithm that is part of the C, C++, and Java libraries, and it is the component on which Quicksort's performance ultimately depends. There have been some elegant partitioning algorithms, and a profound understanding of them may be needed if one has to choose among them. In this paper we undertake a careful study of these algorithms on modern machines with the help of state-of-the-art performance analyzers, and choose the best partitioning algorithm on the basis of some crucial performance indicators.
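
For reference, minimal sketches of the two classic schemes named in the keywords: Lomuto's single-scan partition and Hoare's converging-pointers partition. These use the textbook pivot choices; the variants benchmarked in the paper may differ.

    def lomuto(a, lo, hi):
        """Pivot = a[hi]; one left-to-right scan; returns the pivot's index."""
        pivot, i = a[hi], lo - 1
        for j in range(lo, hi):
            if a[j] <= pivot:
                i += 1
                a[i], a[j] = a[j], a[i]
        a[i + 1], a[hi] = a[hi], a[i + 1]
        return i + 1

    def hoare(a, lo, hi):
        """Pivot = a[lo]; two indices converge; returns the split point."""
        pivot, i, j = a[lo], lo - 1, hi + 1
        while True:
            i += 1
            while a[i] < pivot:
                i += 1
            j -= 1
            while a[j] > pivot:
                j -= 1
            if i >= j:
                return j
            a[i], a[j] = a[j], a[i]

Hoare's scheme typically performs fewer swaps than Lomuto's, which is exactly the kind of performance indicator such a study measures.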

Author 1: D. Abhyankar
Author 2: M. Ingle

Keywords: Quicksort; Hoare Partition; Lomuto Partition; AQTime.

PDF
