Comparative Study on Discrimination Methods for Identifying Dangerous Red Tide Species Based on Wavelet Utilized Classification Methods

A comparative study on discrimination methods for identifying dangerous red tide species, based on wavelet-based classification methods, is conducted. Through experiments, it is found that classification with the proposed wavelet-derived shape information, extracted from microscopic views of phytoplankton, is more effective for identifying dangerous red tide species among the other red tide species than the conventional texture and color information.


INTRODUCTION
Image retrieval success rate (search hit ratio) is often not good enough, due to a poor visual signature or image feature, a poor similarity measure, and poor clustering and classification performance. Several kinds of information can be extracted from images: (1) halftone, color, and spectral information; (2) spatial information, including shape, size, and texture; and (3) relational information, such as the relations between objects included in the images. Conventional image retrieval methods use color information such as HSV (Hue, Saturation, Value/Intensity) or RGB (Red, Green, Blue) as the spectral information, while texture information is used as the spatial information. The Bhattacharyya [1], Euclidean, and Mahalanobis distance measures [2] are well known similarity or distance measures. Not only hierarchical and non-hierarchical clustering, the Bayesian rule of classification, and Maximum Likelihood classification, but also vector quantization, support vector machines, etc. have been proposed and used for image retrieval. Relational information, such as the relations among image portions or segments, semantic information, knowledge-based information, and relational similarity for classifying semantic relations [3], has also been tried in image retrieval. Spatial and spectral information derived from the images in question is applicable to image retrieval. There are moment-based spatial information extraction methods [4], [5], texture-feature-based spatial information extraction methods [6], and spectral-information-based image retrieval methods [7], [8], [9]. Furthermore, some attempts have been made at image retrieval with a wavelet descriptor for spatial information extraction [9], [10]. In general, these conventional methods do not perform well in terms of retrieval success rate.
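As a small illustration of how the distance measures mentioned above differ, the following sketch compares the Euclidean and Mahalanobis distances between two feature vectors. The vectors and covariance matrix are made-up values for demonstration, not data from this study.

```python
import numpy as np

# Two hypothetical 2-D feature vectors and an illustrative feature covariance.
x = np.array([1.0, 2.0])
y = np.array([3.0, 1.0])
cov = np.array([[2.0, 0.3],
                [0.3, 1.0]])

# Euclidean distance treats all feature axes equally.
euclid = float(np.linalg.norm(x - y))

# Mahalanobis distance rescales differences by the feature covariance,
# so high-variance or correlated features contribute less.
diff = x - y
mahal = float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))
```

The choice of measure matters: two feature vectors can be close in the Euclidean sense yet far apart in the Mahalanobis sense when the features have very different variances.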
All of this spectral and spatial information has been used in image retrieval except shape information. There have been some trials using shape information extracted from images with Fourier descriptors and other tools. Several definitions of Fourier descriptors exist. Zahn and Roskies proposed the Z-type descriptor [7], while Granlund proposed the G-type descriptor [11]. The Z-type descriptor expands the cumulative angle change of the contour points from the starting point in a Fourier series, while the G-type descriptor expands the length between the contour points from the starting point of the contour line in a Fourier series. Both descriptors have the following problems: (1) it is hard to express local properties; (2) the shape of the contour cannot be represented when the shape is not closed; and (3) the result depends on the starting point chosen for tracking the contour line. In addition, the Z-type descriptor converges slowly, so it requires relatively large computational resources and its reproducibility of the low-frequency components is not good enough. Meanwhile, the G-type descriptor suffers from the Gibbs phenomenon [12] at the end points of the closed contour curve, so the end points cannot be preserved.
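The G-type descriptor described above can be sketched with a discrete Fourier transform of the contour expressed as complex numbers. This is a minimal illustration; the truncation to a few low-frequency coefficients is exactly what produces the Gibbs ringing discussed in the text.

```python
import numpy as np

def g_type_descriptor(contour, n_coeffs=8):
    """G-type (Granlund) Fourier descriptor: DFT of the complex contour
    z = x + jy. Keeping only the first n_coeffs low-frequency coefficients
    gives a compact shape signature; the truncation causes Gibbs ringing
    when the contour has sharp corners."""
    z = contour[:, 0] + 1j * contour[:, 1]
    C = np.fft.fft(z) / len(z)
    return C[:n_coeffs]

def reconstruct(C, n_points):
    """Inverse DFT from the truncated, zero-padded coefficient set."""
    full = np.zeros(n_points, dtype=complex)
    full[:len(C)] = C
    z = np.fft.ifft(full) * n_points
    return np.column_stack([z.real, z.imag])

# A unit circle traced counter-clockwise: a single Fourier coefficient
# (frequency 1) describes it exactly, so reconstruction is nearly perfect.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.column_stack([np.cos(t), np.sin(t)])
C = g_type_descriptor(circle, n_coeffs=4)
rec = reconstruct(C, 64)
```

For polygonal shapes such as the triangle and trapezium used later in the experiments, far more coefficients are needed and corner points are still smoothed away, which motivates the wavelet descriptor proposed in this paper.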
The shape descriptor proposed here is a wavelet-based descriptor, not a Fourier-type descriptor. The proposed wavelet-based descriptor allows shape description through time-frequency analysis, while a Fourier-based descriptor represents only the frequency components of a shape. This gives the wavelet-based descriptor an advantage over the Fourier descriptor for shape information extraction. Wavelet descriptors have been proposed for best-matching methods that measure the similarity between the feature vectors of two shapes [9], [10]. This is impractical for higher-dimensional feature matching. Therefore, wavelet descriptors are more suitable for model-based object recognition than for data-driven shape retrieval, because shape retrieval is usually conducted online, where speed is essential.
The contour of the object extracted from the original image can be expressed with the wavelet-based descriptor. The proposed image retrieval method is based on hue information and texture, as well as the proposed wavelet-described shape information of the extracted objects, to improve the image retrieval success rate.
The following section describes the proposed image retrieval method, followed by some experiments on the reproducibility of the proposed wavelet descriptor in comparison to the conventional Fourier descriptor, using several simple symmetric and asymmetric shapes. The method is then validated with an image database of phytoplankton [13].

II. PROPOSED METHOD

A. Research Background
There are plenty of red tide species; a small portion of them is shown in Figure 1. Identifying the dangerous red tide species is important. It is, however, not easy, because the three categories of red tide species closely resemble each other. Usually, local fishery research institutes measure red tide from research vessels with microscopes. They count the number of red tide organisms in imagery acquired with a microscope camera on the ship, and identify the red tide species quickly at the same time. Even though human perception is superior to machine-learning-based automatic classification, mistakes occur. The purpose of this research is to improve classification performance by using features which can be extracted from the microscopic imagery data.

B. Process Flow of the Proposed Image Classification
An image classification method is proposed based on hue information [14] and wavelet-description-based shape information [15], as well as texture information of the objects extracted with the dyadic wavelet transformation [16]. The object is assumed to be in focus, so that the frequency components within the object are relatively high in comparison to the rest of the image (the background). Figure 3 shows the process flow of the proposed image classification method.
One of the image features, hue information (an angle), is calculated for the entire image in the color image database. The dyadic wavelet transformation is also applied to the images, and texture information is extracted from the transformed resultant image. Based on the dyadic wavelet transformation, the HH (edge) image is extracted from the original image. Morphological operations, opening and closing, are then applied to the edge-extracted images to remove inappropriate isolated pixels and undesirable image defects. After that, the resultant image is binarized with an appropriate threshold and the contour of the object is extracted. The dyadic wavelet transformation is then applied to the contour in order to extract shape information (the wavelet descriptor). Finally, the Euclidean distance between the target image and the candidate images in the color image database is calculated with the extracted hue, texture, and shape information, and the closest image is retrieved.
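The edge extraction, binarization, and morphological cleanup steps of this process flow can be sketched as follows. This is a simplified stand-in, not the paper's implementation: a first-difference magnitude replaces the HH wavelet component, the threshold is fixed, and a synthetic square plays the role of the focused object.

```python
import numpy as np

# Toy image: a bright 16x16 square (the "focused object") on a dark background.
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0

# High-frequency (edge) image: first-difference magnitudes stand in here
# for the HH component of the dyadic wavelet transformation.
gy = np.abs(np.diff(img, axis=0, prepend=0))
gx = np.abs(np.diff(img, axis=1, prepend=0))
edges = (gx + gy) > 0.5            # binarize with a fixed threshold

# 3x3 binary morphology in pure NumPy. Opening removes isolated pixels;
# closing fills small gaps, mirroring the cleanup step of the process flow.
def _neigh(b):
    p = np.pad(b, 1)
    return np.stack([p[i:i + b.shape[0], j:j + b.shape[1]]
                     for i in range(3) for j in range(3)])

def erode(b):  return _neigh(b).min(axis=0).astype(bool)
def dilate(b): return _neigh(b).max(axis=0).astype(bool)

cleaned = erode(dilate(edges))     # closing: dilation followed by erosion
contour = np.argwhere(cleaned)     # coordinates of the contour pixels
```

The `contour` array is then the input to the shape-descriptor step; in the real pipeline the binarization threshold would be chosen per image rather than fixed.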
The original image can then be reconstructed from the low and high frequency components. Furthermore, if a new parameter s[m] is employed, a lifting dyadic wavelet is defined, in which the high-pass and reconstruction filters are updated with s[m] while perfect reconstruction is preserved.

D. Dyadic Wavelet Based Descriptor (Shape Information)

Image classification with hue and texture information is conventional. In the proposed method, another feature, shape information, is employed. The Fourier descriptor is generally used to represent shape information. Although the Fourier descriptor represents the frequency components of the contour line, location information cannot be described. In other words, the Fourier descriptor supports only frequency analysis, not time-frequency analysis.
The wavelet descriptor proposed in this paper supports time-frequency analysis, so that not only the frequency components but also the locations of contour edges can be discussed [17].
Let u(i) be the distance between a fixed point inside the closed object contour and point i on the contour line; the closed contour can then be represented as u(i), i = 1, 2, …, n, where i = 1 corresponds to 0 degrees and i = n corresponds to 360 degrees. u(i) can be converted with the dyadic wavelet transformation, and the contour line can then be represented by the high-frequency components of the dyadic-wavelet-transformed sequence, as shown in Figure 4.

E. Texture Information

Texture information is also useful for discrimination, and it can be derived from the dyadic wavelet transformation. Texture information is defined as the high-frequency component of the pixel values derived from the dyadic wavelet transformation, applied to the 2x2 pixel blocks defined in Figure 5. The value of each pixel in the object is replaced by the high-frequency component detected with the dyadic wavelet. Thus an image which represents the texture information of the detected object is generated [18].
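The radial signature u(i) and its dyadic wavelet detail components can be sketched as follows. The Haar-like filter pair is an illustrative choice (the paper does not state its filters), and circular indexing is used because u(i) is periodic on a closed contour.

```python
import numpy as np

# Illustrative Haar-like analysis filters for a 1-D dyadic wavelet.
H = np.array([0.5, 0.5])    # low-pass  h[k]
G = np.array([0.5, -0.5])   # high-pass g[k]

def dyadic_level(c, level, filt):
    """One a-trous convolution: filter taps spaced 2**level samples apart,
    with circular indexing, so the output keeps the input length."""
    step = 2 ** level
    out = np.zeros(len(c))
    for k, f in enumerate(filt):
        out += f * np.roll(c, -k * step)
    return out

def wavelet_descriptor(u, levels=3):
    """Stack of high-frequency components d_1..d_levels of the signature u(i)."""
    c, details = u.astype(float), []
    for j in range(levels):
        details.append(dyadic_level(c, j, G))
        c = dyadic_level(c, j, H)
    return np.array(details)

# Radial signature sampled over 0..360 degrees (n = 128 contour points).
t = np.linspace(0, 2 * np.pi, 128, endpoint=False)
u_circle = np.ones(128)              # circle: constant radius
u_wavy = 1 + 0.2 * np.sin(8 * t)     # contour with 8 lobes
d_circle = wavelet_descriptor(u_circle)
d_wavy = wavelet_descriptor(u_wavy)
```

A circle yields an all-zero descriptor (no high-frequency content in its radius), while the lobed contour produces detail coefficients localized at every bump, which is exactly the time-frequency behaviour the text claims for the wavelet descriptor.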

F. Hue angle
Once the contour of the object is detected, the Red, Green, and Blue (RGB) values of the original object image can be transformed into Hue, Saturation, and Value (Intensity), i.e., HSV information. Hue information, expressed as an angle, is particularly useful for discriminating the classes of phytoplankton images. The HSV color coordinate system is shown in Figure 6.
The RGB to HSV conversion can be expressed in the standard way, where H ranges from 0 to 360 degrees, S ranges from 0 to 1, V ranges from 0 to 1, and R, G, and B also range from 0 to 1. The three features, hue, texture, and shape information, compose a three-dimensional feature space in which the Euclidean distance between a query image and the images in the previously created image database is measured. Using this distance, a query image can be matched against the images in the database. Thus image classification can be done with hue and texture information as well as the shape information derived from the dyadic wavelet descriptor.
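The standard RGB to HSV conversion and the nearest-neighbour retrieval in the three-dimensional feature space can be sketched as follows. The feature vectors in the usage line are hypothetical values, not measurements from the paper.

```python
import numpy as np

def rgb_to_hsv(r, g, b):
    """Standard RGB -> HSV conversion with r, g, b in [0, 1]; H in degrees
    [0, 360), S and V in [0, 1], matching the ranges stated in the text."""
    v = max(r, g, b)
    c = v - min(r, g, b)
    s = 0.0 if v == 0 else c / v
    if c == 0:
        h = 0.0
    elif v == r:
        h = 60 * (((g - b) / c) % 6)
    elif v == g:
        h = 60 * ((b - r) / c + 2)
    else:
        h = 60 * ((r - g) / c + 4)
    return h, s, v

def retrieve(query, database):
    """Index of the nearest database entry in the (hue, texture, shape)
    feature space, by Euclidean distance."""
    d = np.linalg.norm(np.asarray(database, dtype=float) - np.asarray(query), axis=1)
    return int(np.argmin(d))
```

For example, `retrieve([0.1, 0.2, 0.3], [[1.0, 1.0, 1.0], [0.1, 0.2, 0.35]])` returns the index of the second entry, the closer feature vector. In practice the three features should be normalized to comparable ranges before the distance is taken, since hue in degrees would otherwise dominate.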

G. Alternative shape information (Fourier Descriptor)
For the G-type Fourier descriptor, the contour coordinates are expressed in the complex plane as

z(i) = x(i) + j y(i), i = 0, 1, …, N-1.    (11)

The space (or time) domain locations can then be transformed with the following equation,

C(k) = (1/N) Σ_{i=0}^{N-1} z(i) exp(-j 2π k i / N),    (12)

and inversely transformed with the following equation,

z(i) = Σ_{k=0}^{N-1} C(k) exp(j 2π k i / N).    (13)

If the transformation and inverse transformation are perfect, the original shape is completely reproduced. As shown in Figure 7, the reproducibility of the proposed wavelet descriptor for the circle, triangle, square, and trapezium (asymmetric) shapes is better than that of the conventional Fourier descriptor.
In the comparison, the original image is binarized and the contour is extracted. Shape information is then extracted with both the Fourier descriptor (G-type) and the dyadic wavelet descriptor. After that, the image is reconstructed from the extracted shape information, and the reconstructions from the two descriptors are compared. The differences between the reconstructed contours and the original image are shown in Table 1. It is thus found that the reproducibility of the dyadic wavelet descriptor is better than that of the conventional Fourier descriptor. This method can be extended to 3D objects. Once a 3D object is imaged by scanning in the roll, pitch, and yaw directions with an appropriate step angle, the contour lines of the acquired 2D images are extracted. The shape complexity of the 3D object is then represented with the wavelet descriptor as a resultant image containing the series of high-frequency components derived from the dyadic wavelet transformation, as shown in Figure 4. This image represents the 3D object's shape complexity as an index, and it is entirely visual. The index is also shift and rotation invariant; that is, it does not change even if the 3D object is translated or rotated.

A. Euclidean Distance
One measure for classification performance evaluation is the Euclidean distance between the classes in question. A shorter Euclidean distance implies poorer classification performance, while a longer distance means better performance.
If texture and hue information are used for classification, the Euclidean distances in Table 2 are obtained, while those for the case of using the wavelet descriptor and texture information are shown in Table 3. In Tables 2 and 3, the Euclidean distances between class #5, Chattonella antiqua, and the other classes are listed, because the primary red tide species at the moment is Chattonella antiqua. The Euclidean distances between Chattonella antiqua and the other species for the case of using all three features together, wavelet descriptor, texture, and hue, are shown in Table 4.
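The between-class Euclidean distance used in the tables can be sketched as the distance between class mean feature vectors. The sample feature values below are hypothetical, chosen only to show the computation.

```python
import numpy as np

def class_separation(class_a, class_b):
    """Euclidean distance between the mean feature vectors of two classes;
    a longer distance suggests the classes are easier to separate."""
    mu_a = np.mean(np.asarray(class_a, dtype=float), axis=0)
    mu_b = np.mean(np.asarray(class_b, dtype=float), axis=0)
    return float(np.linalg.norm(mu_a - mu_b))

# Hypothetical (hue, texture, shape) feature samples for two classes.
class_a = [[0.1, 0.2, 0.3], [0.2, 0.1, 0.3]]
class_b = [[0.8, 0.9, 0.7], [0.9, 0.8, 0.7]]
sep = class_separation(class_a, class_b)
```

Computing this separation for class #5 against every other class, once per feature combination, yields tables of the same shape as Tables 2 through 4.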

Figure 1. Photos of a portion of red tide species.

These red tide species can be classified into three categories: (a) caution-level species, (b) warning-level species, and (c) dangerous species. Fish and shellfish ingest the dangerous red tide species, and people who then eat those fish and shellfish can become ill. For this reason, such red tide species are classified as dangerous.

Figure 3. Process flow of the proposed image classification method.

C. Dyadic Wavelet Transformation

Using the dyadic wavelet, frequency components can be detected. The dyadic wavelet separates frequency components while keeping the image size equal to that of the original image. The dyadic wavelet is also called a binary wavelet; it has high-pass and low-pass analysis filter components {h[k], g[k]} and the corresponding reconstruction filters. The low and high frequency components, c_n and d_n, are obtained by convolving the signal with h and g, with the filter taps spaced 2^n samples apart at level n.
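A one-level sketch of this decomposition and its perfect reconstruction is shown below. The Haar-like filter pair h = [0.5, 0.5], g = [0.5, -0.5] is an illustrative choice, not necessarily the filters used in the paper; circular shifts keep the components the same length as the input, as the dyadic (undecimated) transform requires.

```python
import numpy as np

# A short test signal standing in for one row of pixel values.
x = np.array([4.0, 1.0, 3.0, 7.0, 2.0, 6.0, 5.0, 0.0])

# One analysis level: local average (low-pass) and local difference (high-pass).
c1 = 0.5 * (x + np.roll(x, -1))   # low-frequency component  c_1
d1 = 0.5 * (x - np.roll(x, -1))   # high-frequency component d_1

# Reconstruction: for this filter pair, summing the two components
# recovers the original signal exactly (perfect reconstruction).
x_rec = c1 + d1
```

Deeper levels repeat the same step on c_1 with the filter taps spread further apart (the a-trous scheme), and the image case applies the filters along rows and columns to produce the LL, LH, HL, and HH components.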

Figure 4. Dyadic wavelet descriptor for representation of the closed object contour lines.

Figure 5. Detected object and the 2x2 matrix within the object used to detect texture information with the 2x2 dyadic wavelet transformation.

Figure 6. Hue angle and Saturation as well as Value (Intensity).

Figure 7. Comparison of the reproducibility of shapes between the Fourier and dyadic wavelet descriptors.

TABLE I. COMPARISON OF THE DIFFERENCE BETWEEN THE ORIGINAL AND RECONSTRUCTED CONTOURS WITH FOURIER AND DYADIC WAVELET DESCRIPTORS.