Iris Recognition Using Modified Fuzzy Hypersphere Neural Network with Different Distance Measures

Abstract—In this paper we describe iris recognition using a Modified Fuzzy Hypersphere Neural Network (MFHSNN) and its learning algorithm, an extension of the Fuzzy Hypersphere Neural Network (FHSNN) proposed by Kulkarni et al. We have evaluated the performance of the MFHSNN classifier using different distance measures. It is observed that the Bhattacharyya distance is superior in terms of training and recall time as compared to the Euclidean and Manhattan distance measures. The feasibility of the MFHSNN has been successfully appraised on the CASIA database with 756 images, and the network was found superior in terms of generalization and training time, with equivalent recall time.


I. INTRODUCTION
Iris recognition has become a dynamic theme for security applications, with an emphasis on personal identification based on biometrics. Other biometric features include the face, fingerprint, palm-print, retina, gait, hand geometry, etc. All these biometric features are used in security applications [1]. The human iris, the annular part between the pupil and sclera, has distinctly unique features such as freckles, furrows, stripes, coronas and so on, and it is visible from outside. Personal authentication based on the iris can achieve high accuracy due to the rich texture of iris patterns. Much research has also affirmed that the iris is essentially stable over a person's life, and iris-based personal identification systems can be noninvasive for their users [2].
Iris boundaries can be approximated as two non-concentric circles, so we must determine the inner and outer boundaries with their respective radii and centers. Iris segmentation locates the legitimate part of the iris. The iris is often partially occluded by eyelashes, eyelids and shadows, so segmentation must discriminate the iris texture from the rest of the image. An iris is normally segmented by detecting its inner (pupil) and outer (limbus) boundaries [3][4]. Well-known methods such as the integro-differential operator, the Hough transform and active contour models have been successful in detecting these boundaries. In 1993, Daugman proposed an integro-differential operator to find both the inner and outer iris borders. Wildes represented the iris texture with a Laplacian pyramid constructed with four different resolution levels and used normalized correlation to determine whether the input image and the model image are from the same class [5]. O. Byeon and T. Kim decomposed an iris image into four levels using the 2D Haar wavelet transform and quantized the fourth-level high-frequency information to form an 87-bit code; a modified competitive learning neural network (LVQ) was used for classification [6]. J. Daugman used multiscale quadrature wavelets to extract texture phase structure information of the iris to generate a 2048-bit iris code, and he compared the difference between a pair of iris representations by computing their Hamming distance [7]. Tisse et al. used a combination of the integro-differential operator with a Hough transform for localization; for feature extraction, the concept of instantaneous phase or emergent frequency is used. The iris code is generated by thresholding both the model of emergent frequency and the real and imaginary parts of the instantaneous phase [8]. The comparison between iris signatures produces a numeric dissimilarity value; if this value is higher than a threshold, the system outputs a non-match, meaning that the two patterns belong to different irises [9]. In this paper, we apply the MFHSNN classifier, a modification of the Fuzzy Hypersphere Neural Network (FHSNN) proposed by Kulkarni et al. [10]. Ruggero Donida Labati et al. represented the detection of the iris center and boundaries using neural networks. Their algorithm starts from an initial random point in the input image and then processes a set of local image properties in a circular region of interest, searching for the peculiar transition patterns of the iris boundaries. A trained neural network processes the parameters associated with the extracted boundaries and estimates the offsets along the vertical and horizontal axes with respect to the estimated center [12].
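Daugman's comparison of two binary iris codes by their Hamming distance, mentioned above, can be sketched as follows. The 2048-bit code generation itself is outside the scope of this sketch; only the final comparison step is shown:

```python
import numpy as np

def hamming_distance(code_a, code_b):
    """Fraction of disagreeing bits between two equal-length iris codes.

    A small distance indicates that the two codes likely come from
    the same iris; a match/non-match decision is made by comparing
    this value against a threshold.
    """
    code_a = np.asarray(code_a, dtype=bool)
    code_b = np.asarray(code_b, dtype=bool)
    return np.count_nonzero(code_a ^ code_b) / code_a.size
```

Identical codes give a distance of 0, and fully complementary codes give 1; independent irises tend to produce distances near 0.5.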

II. TOPOLOGY OF MODIFIED FUZZY HYPERSPHERE NEURAL NETWORK
The MFHSNN consists of four layers, as shown in Fig. 1(a). The first, second, third and fourth layers are denoted F_R, F_M, F_N and F_O, respectively. The F_R layer accepts an input pattern and consists of n processing elements, one for each dimension of the pattern. The F_M layer consists of q processing nodes that are constructed during training; each node represents a hypersphere fuzzy set characterized by the hypersphere membership function [11].

 
Each F_O node delivers a non-fuzzy output: 0 when the membership of its class is not the maximum over all classes, and 1 when it is.
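The hypersphere membership function assigns full membership to patterns inside a hypersphere and a value that decays with distance outside it, at a rate set by the sensitivity parameter γ. The following sketch assumes a simple linear decay; the paper's exact expression is the one given in its equation (1) and plotted in Fig. 2:

```python
import numpy as np

def hypersphere_membership(pattern, center, radius, gamma=1.0):
    """Fuzzy membership of `pattern` in a hypersphere fuzzy set.

    Returns 1.0 inside the hypersphere; outside, the value falls off
    with the distance beyond the radius, governed by the sensitivity
    parameter gamma (an assumed linear decay, not the paper's exact
    equation).
    """
    d = float(np.linalg.norm(np.asarray(pattern) - np.asarray(center)))
    if d <= radius:
        return 1.0
    return max(0.0, 1.0 - gamma * (d - radius))
```

For example, with center point [0.5, 0.5] and radius 0.3 (the values plotted in Fig. 2), any pattern within distance 0.3 of the center has membership 1.0.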

III. MFHSNN LEARNING ALGORITHM
The supervised MFHSNN learning algorithm for creating fuzzy hyperspheres in hyperspace consists of three steps.

A. Creation of Hyperspheres (HSs)
Given the h-th training pair (R_h, d_h), find all the hyperspheres belonging to the class d_h. These hyperspheres are arranged in ascending order according to the distance between the input pattern and the center point of each hypersphere. After this, the following steps are carried out sequentially for possible inclusion of the input pattern R_h.
Step 1: Determine whether the pattern R_h is contained by any one of the hyperspheres. This can be verified using the fuzzy hypersphere membership function defined in equation (4). If R_h is contained by any hypersphere, it is included; all remaining steps are therefore skipped and training continues with the next training pair.
Step 2: If the pattern R_h falls outside the hypersphere, the hypersphere is expanded to include it, provided the expansion criterion is satisfied. For hypersphere m_j to include R_h, the following constraint must be met.
Here we have proposed a new approach for testing the expansion of hyperspheres based on the Bhattacharyya distance, which yields superior results as compared to the Euclidean and Manhattan (sum of absolute differences) distances.
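The three distance measures compared in this work can be sketched as follows. The Euclidean and Manhattan forms operate directly on feature vectors; the Bhattacharyya form shown here is the common histogram-based definition and is an assumption about the variant used, since the paper does not spell out its exact formula:

```python
import numpy as np

def euclidean(a, b):
    """Euclidean (L2) distance between two feature vectors."""
    return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

def manhattan(a, b):
    """Manhattan (L1) distance: sum of absolute differences."""
    return float(np.sum(np.abs(np.asarray(a) - np.asarray(b))))

def bhattacharyya(p, q):
    """Bhattacharyya distance between two histograms (assumed form).

    Both inputs are normalized to sum to 1; the distance is the
    negative log of the Bhattacharyya coefficient.
    """
    p = np.asarray(p, dtype=float); p = p / p.sum()
    q = np.asarray(q, dtype=float); q = q / q.sum()
    bc = np.sum(np.sqrt(p * q))  # Bhattacharyya coefficient
    return float(-np.log(bc))
```

Identical distributions give a Bhattacharyya distance of 0, and the value grows as the distributions diverge.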

If the expansion criterion is met, the pattern R_h is included by expanding the hypersphere, i.e., its radius is increased to the distance between its center and R_h.
Step 3: If the pattern R_h is not included by any of the above steps, then a new hypersphere is created for that class, with center R_h and radius zero.
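The three steps above can be sketched as a single training pass. The names and the expansion bound `lambda_max` (the maximum allowed hypersphere radius) are illustrative assumptions, Euclidean distance is used for simplicity, and the overlap-removal step of Section B is omitted:

```python
import numpy as np

def train_pattern(hyperspheres, pattern, label, lambda_max=0.3):
    """One MFHSNN-style training step: contain, expand, or create.

    `hyperspheres` is a list of dicts with keys 'center', 'radius',
    'label'. Candidate hyperspheres of the pattern's class are visited
    nearest-first, as described in Section III.A.
    """
    pattern = np.asarray(pattern, dtype=float)
    same_class = [h for h in hyperspheres if h['label'] == label]
    same_class.sort(key=lambda h: np.linalg.norm(pattern - h['center']))
    for h in same_class:
        d = np.linalg.norm(pattern - h['center'])
        if d <= h['radius']:      # Step 1: pattern already contained
            return hyperspheres
        if d <= lambda_max:       # Step 2: expand if criterion is met
            h['radius'] = d
            return hyperspheres
    # Step 3: create a new point hypersphere for this class.
    hyperspheres.append({'center': pattern, 'radius': 0.0, 'label': label})
    return hyperspheres
```

Patterns near an existing same-class hypersphere grow that hypersphere; distant patterns spawn new ones, which is what gives the network its fast one-pass learning.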

B. Overlap Test
The learning algorithm allows overlap of hyperspheres from the same class and eliminates overlap between hyperspheres from different classes. The overlap test is therefore performed as soon as a hypersphere is expanded in Step 2 or created in Step 3.
If the distance between the two center points is less than the sum of the two radii, the two hyperspheres from different classes are overlapping.
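Under Euclidean geometry, two hyperspheres intersect exactly when their centers are closer than the sum of their radii; a minimal check (illustrative, using Euclidean distance):

```python
import numpy as np

def hyperspheres_overlap(center_u, radius_u, center_v, radius_v):
    """True when two hyperspheres intersect, i.e. the distance between
    their centers is less than the sum of their radii. Used to detect
    overlap between hyperspheres of different classes after expansion
    or creation."""
    d = np.linalg.norm(np.asarray(center_u) - np.asarray(center_v))
    return bool(d < radius_u + radius_v)
```

When this check succeeds for hyperspheres of different classes, the learning algorithm shrinks or adjusts them to remove the overlap.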

IV. IRIS SEGMENTATION AND FEATURE EXTRACTION
Iris segmentation plays a very important role in detecting iris patterns; its purpose is to locate the valid part of the iris for iris biometrics [3]. Finding the inner and outer boundaries (pupillary and limbic), as shown in Fig. 5(a), involves localizing the upper and lower eyelids if they occlude the iris, and detecting and excluding any overlaid occlusion of eyelashes and their reflections. The best-known algorithm for iris segmentation is Daugman's integro-differential operator for finding the iris boundaries. The iris has a particularly interesting structure and provides rich texture information. Here we have implemented the principal component analysis (PCA) method for feature extraction, which captures local underlying information from an isolated image. This method yields a high-dimensional feature vector. To improve the training and recall efficiency of the network, we have used the singular value decomposition (SVD) method to reduce the dimensionality of the feature vector, and the MFHSNN is used for classification. SVD is a method for identifying and ordering the dimensions along which the features exhibit the most variation [13]; once the directions of most variation are identified, the best approximation of the original data points can be formed using fewer dimensions. This can be described as follows: the covariance matrix C is defined from the data, and U is an n × m matrix. SVD orders the singular values in descending order; if n ≤ m, the first n columns of U correspond to the sorted eigenvalues of C, and otherwise the first m columns correspond to the sorted non-zero eigenvalues of C. The transformed data can thus be written in terms of U, where U^T U is the identity matrix (ones on the diagonal and zeros elsewhere). Hence equation (13) is a decomposition of equation (11).
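The SVD-based dimensionality reduction described above can be sketched with NumPy. The number of retained components `k` is an illustrative choice; the paper does not state the value it used:

```python
import numpy as np

def svd_reduce(features, k):
    """Project feature vectors onto the top-k singular directions.

    `features` is an (m, n) matrix with one feature vector per row.
    Rows are mean-centered, then the right singular vectors with the
    largest singular values (np.linalg.svd returns them sorted in
    descending order) define the reduced feature space.
    """
    X = np.asarray(features, dtype=float)
    X = X - X.mean(axis=0)                 # center the data
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:k].T                    # (m, k) reduced features
```

The reduced vectors are then what the MFHSNN classifier is trained on, shortening both training and recall.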

V. EXPERIMENTAL RESULTS
The CASIA Iris Image Database Version 1.0 (CASIA-IrisV1) includes 756 iris images from 108 eyes. For each eye, 7 images were captured in two sessions: three samples in the first session and four in the second. For each iris class, we choose two samples from each session for training and the remaining ones as testing samples; there are thus 540 images for training and 216 images for testing. The timing analysis of training and recall, and the recognition results in terms of the number of hyperspheres, radius and recognition rate, are depicted in Table I and Table II.

VI. CONCLUSION
In this paper, we described an iris recognition algorithm using the MFHSNN, which has the ability to learn patterns faster by creating/expanding hyperspheres. It has been verified on the CASIA database, with results as shown in Table I and Table II. The MFHSNN can also be adapted for other pattern recognition problems. Our future work will be to improve the iris recognition rate using fuzzy neural networks.

Figure 1. (a) Modified Fuzzy Hypersphere Neural Network. The argument l is defined such that the pattern R_h is contained by the hypersphere. The parameter γ ≥ 0 is a sensitivity parameter, which governs how fast the membership value decreases outside the hypersphere as the distance between R_h and C_j increases. A sample plot of the membership function with center point [0.5, 0.5] and radius equal to 0.3 is shown in Fig. 2.

Figure 2. Plot of the Modified Fuzzy Hypersphere membership function for γ = 1. Each node of the F_N and F_O layers represents a class.

Overlap test for Step 2: Let the hypersphere be expanded to include the input pattern R_h, and let the expansion create an overlap with a hypersphere m_v that belongs to another class. Suppose [C_u, r_u] represents the center point and radius of the expanded hypersphere, and [C_v, r_v] the center point and radius of the hypersphere of the other class, as depicted in Fig. 3(a). If the distance between the two centers is less than the sum of the two radii, the hyperspheres from separate classes are overlapping.

Overlap test for Step 3: If the created hypersphere falls inside a hypersphere of another class, there is an overlap. Suppose m_p represents the hypersphere created to include the input R_h, and m_q represents the hypersphere of the other class, as shown in Fig. 4(b). The presence of overlap in this case can be verified using the membership function defined in equation (1): if the membership of the created hypersphere's center in m_q equals 1, the two hyperspheres from different classes overlap.

Figure 4. (b) Status of the hyperspheres after removing an overlap in Step 3.

Figure 5. (a) Inner and outer boundaries detected with different radii.