
Published in IET Image Processing
Received on 14th September 2011
Revised on 7th July 2012
doi: 10.1049/iet-ipr.2011.0445
ISSN 1751-9659

Image retrieval and classification using adaptive local binary patterns based on texture features

C.-H. Lin (1), C.-W. Liu (2), H.-Y. Chen (3)

(1) Department of Computer Science and Information Engineering, National Taichung University of Science and Technology, No. 129, Section 3, Sanmin Road, Taichung, Taiwan
(2) Department of Computer and Information Science, National Chiao Tung University, No. 1001 University Road, Hsinchu 300, Taiwan
(3) Department of Electrical Engineering, National Chung Hsing University, No. 250 Kuo Kuang Road, Taichung, Taiwan
E-mail: linch@ntit.edu.tw

Abstract: In this study, adaptive local binary patterns (ALBP) are proposed for image retrieval and classification. ALBP build on the texture features of local binary patterns, from which two new descriptors are derived: the adaptive local binary patterns histogram (ALBPH) and the gradient for adaptive local binary patterns (GALBP). These two texture features are particularly useful for describing the relationships within a local neighbourhood. ALBPH captures the texture distribution of an image by exploiting the differences between the centre pixel and its neighbourhood pixel values. In GALBP, the gradient of each pixel is computed, and the sum of the gradients over each ALBP pattern number is adopted as an image feature. A set of colour and greyscale images was used to generate a variety of image subsets, on which image retrieval and classification experiments were carried out for analysis and comparison with other methods. The experimental results show that the proposed feature extraction method effectively describes the texture characteristics of images, and that it yields better retrieval and classification results than the compared methods.

1 Introduction

In recent years, with the emergence of the internet, multimedia data have become easier to share. These data include digital images of textures, natural scenes, animals and plants, digital signs, fingerprints, faces, digital maps, medical images, art images and others. Large amounts of digital content have been created, stored and disseminated on account of the rapid expansion of image creation, storage and management technologies. How to effectively and efficiently retrieve desirable images from a constantly growing image database has thus become an important issue.

Texture features play an important role in computer vision and image processing. Image retrieval and classification based on texture features are active research topics in the fields of computer vision and pattern recognition [1]. This paper focuses on building an efficient and accurate texture image retrieval and classification system. Many texture feature-based image retrieval systems have been proposed in the academic arena [2-8], and texture classification methods have been the focus of many studies [9-16].

Huang and Dai [2] proposed a texture-based image retrieval system that integrates wavelet decomposition and a gradient vector. The system of Jhanwar et al. [3] is based on a motif co-occurrence matrix, which converts the differences among pixels into basic graphic primitives and computes the probability of their occurrence in the adjacent area as a texture feature of an image. Hafiane and Zavidovique [6] proposed a description of coloured textures using local relational strings (LRS) based on the relative relations between neighbouring pixels and their distribution. Lin et al.'s [8] approach is based on a colour co-occurrence matrix and uses the differences between pixels in scan patterns for colour and texture image retrieval.

For texture image classification, Deng and Clausi [11] developed an anisotropic circular Gaussian MRF model for extracting rotation-invariant texture features. Varma and Zisserman [12] investigated texture classification using single images obtained under unknown viewpoint and illumination. Bianconi et al. [14] proposed a coordinated clusters representation (CCR) based on the probability of occurrence of elementary binary patterns (texels) defined over a square window. The CCR was originally proposed for binary textures, but was later extended to greyscale texture images through global image thresholding.

Ojala et al. [17] proposed the concept of the local binary pattern (LBP) operator. The LBP operator primarily describes the texture in images and provides a theoretically simple, multi-resolution statistical method. Many studies discuss the LBP operator [17-27], which is an effective way to describe image texture features. More recently, the LBP operator has been applied to classification [16-22], facial expression recognition [23-25], fingerprint recognition [26] and shape localisation [27].

Guo et al. [16] completed a modelling of the LBP operator, and an associated complete LBP (CLBP) scheme was developed for texture classification. Ojala et al. [19] presented a theoretically simple, yet efficient, multi-resolution approach to greyscale and rotation-invariant texture classification using local binary patterns and non-parametric discrimination of sample and prototype distributions. Zhou et al. [20] extended the LBP operator using 'uniform' patterns, although it still possesses some shortcomings: it discards important texture information and is sensitive to noise. Liao et al. [21] proposed features that are robust to image rotation and less sensitive to histogram equalisation and noise. Guo et al. [22] proposed an alternative hybrid scheme, a globally rotation-invariant matching of locally variant LBP texture features.

The LBP operator has been evaluated against both texture measures that have been used successfully in various applications and some promising approaches proposed recently. However, the LBP(riu2, P, R) operator does not discern the complexity of the texture between the centre pixel and the neighbourhood pixels, consider the magnitude of the difference between a pixel and its neighbouring pixels, or capture the distribution of the differences between them. Thus, LBP is not suitable when adjacent grey values are very close to each other. This paper uses the difference between the centre pixel and the neighbourhood pixel values of every pattern number as an accurate way to describe the texture features of an image; however, as most images are composed mainly of smooth regions, these smooth regions may contain pixels with only small differences in greyscale intensity.

The adaptive local binary patterns histogram (ALBPH) and the gradient for adaptive local binary patterns (GALBP) can effectively describe the various properties of an image. To enhance retrieval performance, ALBPH and GALBP are integrated into an image retrieval and classification procedure based on texture distribution; integrating multiple complementary features can increase retrieval and classification performance.

Related research is introduced in the next section. Section 2.1 briefly reviews greyscale and rotation-invariant local binary patterns (GSRILBP). Section 2.2 presents the smooth and unsmooth regions. ALBPH features are introduced in Section 2.3 and GALBP features in Section 2.4. Section 3 presents the image retrieval system. In Section 4, we propose the image classification system based on two-class support vector machines (SVMs). The experiments and comparisons with other approaches are presented in Section 5. Conclusions are offered in Section 6.

2 Proposed features

2.1 Greyscale and rotation-invariant local binary patterns [19]

Ojala et al. [19] proposed the GSRILBP, drawing from the texture T(x, y) of the pixel at coordinates (x, y) in a local neighbourhood as the joint distribution of the grey levels of P nodes (P > 1), as shown in Fig. 1:

$$T(x, y) = t\left[f_c(x, y),\, f_0(x, y),\, \ldots,\, f_{P-1}(x, y)\right] \quad (1)$$

where f_c(x, y) is the grey value of the centre pixel at coordinates (x, y) of the local neighbourhood, and f_i(x, y) is the grey value of the ith node on a circle of radius R (R > 0, R ∈ Z) that forms a circularly symmetric set of coordinates around (x, y). The coordinate of f_i(x, y) is (x − R cos(2πi/P), y + R sin(2πi/P)), i = 0, 1, …, P − 1, where P is a multiple of 4 (P = 4, 8, 12, …).

Fig. 1 Grey levels in total node P (P > 1)

The greyscale and rotation-invariant LBP [19] has P + 2 patterns, divided into two classes: uniform and non-uniform patterns. Uniform patterns are given pattern numbers from 0 to P, and non-uniform patterns are classified as pattern number P + 1. GSRILBP defines the uniformity measure U(LBP_{P,R}) as

$$U(\mathrm{LBP}_{P,R}) = \left| s(f_{P-1} - f_c) - s(f_0 - f_c) \right| + \sum_{i=1}^{P-1} \left| s(f_i - f_c) - s(f_{i-1} - f_c) \right| \quad (2)$$

where s(f_i − f_c) is the sign function of the difference between the pixel values f_i and f_c: s(f_i − f_c) = 1 if f_i − f_c ≥ 0, and s(f_i − f_c) = 0 if f_i − f_c < 0, for i = 0, 1, …, P − 1. A pixel is a uniform pattern if the uniformity measure U(LBP_{P,R}) at coordinates (x, y) is no greater than the threshold value U; otherwise, the pixel is a non-uniform pattern. Therefore the GSRILBP is

$$\mathrm{LBP}(riu2, P, R) = \begin{cases} \sum_{i=0}^{P-1} s(f_i - f_c), & \text{if } U(\mathrm{LBP}_{P,R}) \le U \\ P + 1, & \text{otherwise} \end{cases} \quad (3)$$

where riu2 denotes the use of rotation-invariant uniform patterns with a U value of 2. A pattern is uniform if U(LBP_{P,R}) ≤ U, and uniform patterns are given numbers from 0 to P; non-uniform patterns are given the number P + 1. The image feature is thus computed over the P + 2 pattern numbers of each image.

2.2 Smooth and unsmooth region

An edge detection technique is applied to separate the smooth and unsmooth regions in this paper. The RGB to YCbCr conversion is one of the most commonly used colour coordinate transforms in image processing: Y is the luminance component, and Cb and Cr are the chrominance components. The luminance edge is determined by performing edge detection on the Y information. The RGB values of the pixel (x, y) are transformed into the YCbCr domain using the following formula

$$\begin{bmatrix} Y \\ C_b \\ C_r \end{bmatrix} = \begin{bmatrix} 0.299 & 0.587 & 0.114 \\ -0.168 & -0.331 & 0.5 \\ 0.5 & -0.418 & -0.081 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} + \begin{bmatrix} 0 \\ 128 \\ 128 \end{bmatrix} \quad (4)$$

Sobel edge detection [28] was employed to detect the gradient feature. The gradient magnitude ∇Y and direction θ(x, y) for Y(x, y) of the pixel at coordinates (x, y) are given by

$$\nabla Y = g(x, y) = \sqrt{g_x^2(x, y) + g_y^2(x, y)}, \qquad \theta(x, y) = \tan^{-1}\frac{g_y(x, y)}{g_x(x, y)} \quad (5)$$

In most cases, the edge variation can be expressed in the horizontal direction g_x and the vertical direction g_y. A 3 × 3 block was used to compute the variations g_x and g_y in the horizontal and vertical directions.

After the gradients g of the entire image were computed, the ratio of each gradient to the total was estimated, as shown in Fig. 2. Let r_i denote the cumulative percentage of the ith gradient level, expressed as

$$r_i = \sum_{j=0}^{i} p_r(g_j) = \sum_{j=0}^{i} \frac{n_{g_j}}{N} \quad (6)$$

where N is the total pixel number in an image, p_r(g_j) is the probability of occurrence of the jth gradient level g_j, and n_{g_j} is the number of pixels that have gradient level g_j.

It is suggested that r_i = 10% and that the gradient threshold be d_T = g_i. If a pixel's gradient satisfies g < d_T, the pixel is regarded as lying in the smooth region; otherwise, it lies in the unsmooth region.

Fig. 2 Gradient cumulative probability graph
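A sketch of how this threshold selection can be implemented, assuming the Y channel of eq. (4) as input and SciPy's Sobel filters for eq. (5); since r_i of eq. (6) is a cumulative histogram ratio, picking the gradient level where it reaches 10% amounts to taking the 10% quantile of the gradient magnitudes. All names are ours.

```python
import numpy as np
from scipy import ndimage

def luminance(rgb):
    """Y component of eq. (4); the +128 offsets apply only to Cb and Cr."""
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def gradient_threshold(Y, ratio=0.10):
    """Gradient threshold d_T such that the cumulative ratio r_i of
    eq. (6) reaches `ratio`; pixels with g < d_T form the smooth region."""
    gx = ndimage.sobel(Y.astype(float), axis=1)   # horizontal variation g_x
    gy = ndimage.sobel(Y.astype(float), axis=0)   # vertical variation g_y
    g = np.hypot(gx, gy)                          # gradient magnitude of eq. (5)
    dT = np.quantile(g, ratio)                    # level where r_i first hits `ratio`
    return dT, g, g < dT                          # threshold, gradients, smooth mask
```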

2.3 Adaptive local binary patterns histogram

Determining the difference between the value of the centre pixel f_c and that of the neighbourhood pixel f_i for each pattern number provides an accurate way to describe the texture features of an image. However, most images are composed mainly of smooth regions, and although the pixels of these smooth regions may differ only slightly in greyscale intensity, they are still assigned one of the P + 2 pattern numbers. As shown in Fig. 3, the pattern number LBP(riu2, 8, 1) in both Figs. 3a and b is 4, even though the differences between the f_i and f_c values in the two cases are very different. These difference values express the texture description of the image: in the smooth areas of an image the difference is small, whereas in the coarse, unsmooth areas the difference is large. For that reason, we define and compute the adaptive local binary patterns (ALBP) at each pixel of an image; ALBP defines the useful LBP patterns according to the smooth and unsmooth regions.

Fig. 3 Pattern number of LBP(riu2, P, R) for 3 × 3 grids

To better describe the image features, the pattern number of LBP within a smooth region is denoted by P + 2, and the local binary patterns of an unsmooth region are given pattern numbers from 0 to P + 1. Therefore the ALBP is defined as

$$\mathrm{ALBP}(riu2, P, R) = \begin{cases} \sum_{i=0}^{P-1} s(f_i - f_c), & \text{if } U(\mathrm{LBP}_{P,R}) \le 2 \text{ and } D_c \ge D_T \\ P + 1, & \text{if } U(\mathrm{LBP}_{P,R}) > 2 \text{ and } D_c \ge D_T \\ P + 2, & \text{otherwise} \end{cases} \quad (7)$$

where D_c is the gradient value g_c of the centre pixel and D_T is the gradient threshold.

We apply the ALBPH of an image as one of its texture features. There are P + 2 different pattern numbers for each image; before computing the pattern number histogram of the ALBP features, the pixels of all database images are categorised into the P + 2 ALBP pattern numbers. The ALBP histogram shows the texture distribution of an image. The ALBPH feature of the kth pattern number is defined as

$$\mathrm{ALBPH}_k(riu2, P, R) = \frac{N_k}{N} \quad (8)$$

where N is the total pixel number and N_k is the number of pixels with the kth pattern number in the image; therefore P + 2 ALBPH feature values can be obtained from a grey image.
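The pieces above combine into eq. (7) and the histogram of eq. (8) along the following lines; `lbp_riu2` and `gradient_threshold` are the illustrative helpers sketched earlier, not functions from the paper. Note that eq. (7) can emit the labels 0..P+2, so this sketch allocates P + 3 bins even though the paper refers to these as its P + 2 pattern numbers.

```python
import numpy as np

def albp_histogram(img, g, dT, P=8, R=1):
    """ALBPH feature of eq. (8): the fraction of pixels carrying each
    ALBP pattern number of eq. (7). Returns (histogram, label map)."""
    H, W = img.shape
    labels = np.zeros((H, W), dtype=int)
    for y in range(R + 1, H - R - 1):
        for x in range(R + 1, W - R - 1):
            if g[y, x] >= dT:                     # unsmooth pixel: D_c >= D_T
                labels[y, x] = lbp_riu2(img, x, y, P, R)
            else:                                 # smooth pixel: number P + 2
                labels[y, x] = P + 2
    interior = labels[R + 1:H - R - 1, R + 1:W - R - 1]
    counts = np.bincount(interior.ravel(), minlength=P + 3)
    return counts / interior.size, labels         # ALBPH_k = N_k / N
```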

2.4 Gradient for ALBP

In the previous section, the ALBP histogram was used as an image feature; however, not all pixels with the same pattern number have the same characteristics. Thus, in this section, the gradient of each pixel is computed, and the sum of the gradients for each ALBP pattern number (GALBP) is adopted as an image feature.

GALBP calculates the sum of the pixel gradients belonging to each of the P + 2 pattern numbers: the gradients are grouped by pattern number and summed over the entire image to obtain the GALBP feature. Let g_i(x, y) be the gradient of the pixel at coordinates (x, y) corresponding to the ith pattern number in the image. The sum of the gradients g_i of the ith pattern number, denoted GALBP_i, can be computed as

$$\mathrm{GALBP}_i = \sum_{j=1}^{N_i} g_i \quad (9)$$

where N_i is the total number of pixels with the ith pattern number.
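A matching sketch of eq. (9), reusing the label map and gradient magnitudes from the previous snippets (again with our own names; border pixels never written by `albp_histogram` should be masked out in practice):

```python
import numpy as np

def galbp(labels, g, P=8):
    """GALBP_i of eq. (9): the sum of the gradient magnitudes of all
    pixels whose ALBP pattern number equals i, for i = 0..P+2."""
    return np.array([g[labels == i].sum() for i in range(P + 3)])
```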

3 Image retrieval system

ALBPH and GALBP are useful for describing the relationships between grey levels and textures in an image. The two features are highly complementary and can be integrated to establish a grey-difference and grey-gradient based image retrieval and classification feature (the AGLBP feature).

The ALBPH vectors (ALBPH^q_1, ALBPH^q_2, …, ALBPH^q_{P+2}) and (ALBPH^d_1, ALBPH^d_2, …, ALBPH^d_{P+2}) of the query image Q and the database image D were obtained from (8). The image matching distance D_ALBPH between Q and D based on the ALBPH can be calculated using the following equation

$$D_{\mathrm{ALBPH}} = \sum_{k=1}^{P+2} \left| \frac{\mathrm{ALBPH}^q_k - \mathrm{ALBPH}^d_k}{\mathrm{ALBPH}^q_k + \mathrm{ALBPH}^d_k + n} \right| \quad (10)$$

where the superscripts q and d stand for the query image Q and the database image D, and n is any small number that prevents the denominator from being 0.

The GALBP vectors (GALBP^q_1, GALBP^q_2, …, GALBP^q_{P+2}) and (GALBP^d_1, GALBP^d_2, …, GALBP^d_{P+2}) of images Q and D were obtained from (9), so the image matching distance D_GALBP between Q and D based on GALBP can be formulated as

$$D_{\mathrm{GALBP}} = \sum_{k=1}^{P+2} \left| \frac{\mathrm{GALBP}^q_k - \mathrm{GALBP}^d_k}{\mathrm{GALBP}^q_k + \mathrm{GALBP}^d_k + n} \right| \quad (11)$$

The proposed AGLBP retrieval system combines ALBPH and GALBP to quantify the similarity between Q and D. We define the image matching distance D_AGLBP between Q and D of the AGLBP retrieval system, which determines the similarity of images, as

$$D_{\mathrm{AGLBP}} = D_{\mathrm{ALBPH}} + D_{\mathrm{GALBP}} \quad (12)$$

Generally, D_AGLBP decreases as the similarity between Q and D increases; hence, the AGLBP retrieval system delivers from the database the images with minimal D_AGLBP.

The precision and recall measurements of Mehtre et al. [29] are often used to describe the performance of an image retrieval system. The precision (P) and recall (R) are defined as

$$P(k) = \frac{n_k}{L}, \qquad R(k) = \frac{n_k}{N} \quad (13)$$

where L is the total number of retrieved similar images in the database, n_k is the number of relevant matches among all the k retrievals and N is the number of all relevant images in the database.
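A sketch of the matching distances (10)-(12) and the precision/recall of eq. (13); `n` is the small stabilising constant from the paper, and the remaining names are ours.

```python
import numpy as np

def aglbp_distance(albph_q, albph_d, galbp_q, galbp_d, n=1e-9):
    """D_AGLBP = D_ALBPH + D_GALBP of eqs. (10)-(12) between a query
    feature pair (q) and a database feature pair (d)."""
    d_albph = np.sum(np.abs(albph_q - albph_d) / (albph_q + albph_d + n))
    d_galbp = np.sum(np.abs(galbp_q - galbp_d) / (galbp_q + galbp_d + n))
    return d_albph + d_galbp

def precision_recall(n_k, L, N):
    """Eq. (13): precision P(k) = n_k / L and recall R(k) = n_k / N."""
    return n_k / L, n_k / N
```

A query is then answered by ranking all database images by ascending D_AGLBP.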

4 Image classification system for two-class SVMs

SVMs [30] are appropriate for examining the classification performance of various techniques and perform well on classification problems. In this paper, the classifier that performs texture classification is trained on the AGLBP texture features. The multi-class classification problem is decomposed into several two-class problems, each of which can be addressed directly by a two-class SVM; on this basis, the image classification system was constructed from two-class SVM classifiers to classify textures with multiple classes.

The two-class SVMs are assumed to have a training data set X = {x_i, y_i}, where x_i is the ith feature vector, y_i is the class information, x_i ∈ R^d, y_i ∈ {+1, −1} and i = 1, 2, …, n. From these data, the classification line for the two types can be determined as f(x) = w^T x − b, where a point with y_i = −1 has f(x) < 0 and a point with y_i = +1 has f(x) > 0; therefore we can use f(x) to determine the data type.

First, we separated the data into training data and test data. Supposing there are N classes of training material, any two classes are selected and a two-class SVM is trained to determine the optimal hyperplane between them, giving a total of N(N − 1)/2 classifiers. The test data were then classified using the classifiers obtained from the training data: each classifier assigns each test datum to one of its two classes, so each test datum receives N(N − 1)/2 votes. The votes for each class were summed, and the class with the highest number of votes was taken as the class of the test datum.

This classification system uses the AGLBP features to perform the two-class SVM classification. The system used the MATLAB SVM tool of Chang and Lin [31] for the calculation. Moreover, the SVM used an RBF kernel and employed cross-validation to determine the best parameters C and γ; the search ranges were [2^−5, 2^16] for C and [2^−15, 2^14] for γ.
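A sketch of this one-against-one voting scheme, substituting scikit-learn's two-class SVC with an RBF kernel for the MATLAB LIBSVM tool used in the paper; the cross-validated grid search over C and γ is abbreviated to fixed values here.

```python
import itertools
import numpy as np
from sklearn.svm import SVC

def train_pairwise_svms(X, y):
    """One two-class RBF SVM per pair of classes: N(N-1)/2 classifiers."""
    models = {}
    for a, b in itertools.combinations(np.unique(y), 2):
        mask = (y == a) | (y == b)
        clf = SVC(kernel='rbf', C=1.0, gamma='scale')  # C, gamma: cross-validate in practice
        clf.fit(X[mask], y[mask])
        models[(a, b)] = clf
    return models

def classify_by_vote(models, x):
    """Each pairwise classifier casts one vote; the majority class wins."""
    votes = [clf.predict(x[None, :])[0] for clf in models.values()]
    classes, counts = np.unique(votes, return_counts=True)
    return classes[np.argmax(counts)]
```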

5 Experimental results

The performance of the proposed method for image retrieval and classification was evaluated using two image sets. Image Set 1 consists of grey texture images from the Brodatz texture database, and Image Set 2 consists of colour texture images from the Columbia-Utrecht (CUReT) database. Each image set includes four types of texture variation: the original textures, histogram-equalised textures, randomly rotated textures and histogram-equalised rotated textures.

Image Set 1 (as shown in Fig. 4) contains 111 different grey-level texture images of 640 × 640 pixels. Each image is partitioned into 25 (5 × 5) non-overlapping sub-images of 128 × 128 pixels. Subset 1 consists of sub-images downsampled to 64 × 64 pixels by averaging each group of four adjacent pixels. Subset 2 is the histogram equalisation of Subset 1. Subset 3 is the centre 64 × 64 pixels after a random rotation of each sub-image. Subset 4 is the histogram equalisation of Subset 3. Subsets 3 and 4 were generated by 10 random rotations; the average retrieval precision, the classification accuracy and the standard deviation were computed over these 10 random rotations.

Image Set 2 (as shown in Fig. 5) contains 45 colour texture images selected from the CUReT database. Each colour image is 320 × 320 pixels and was transformed into a grey image in this paper. Each class of texture is represented by one texture image. Each image was partitioned into 9 (3 × 3) non-overlapping sub-images of 106 × 106 pixels. Subset 5 is defined as the centre 64 × 64 pixels of each sub-image, without interpolation. Subset 6 is defined as the histogram equalisation of Subset 5. Subset 7 is defined as the centre 64 × 64 pixels after a random rotation of each sub-image, and Subset 8 as the histogram equalisation of Subset 7. Subsets 7 and 8 were generated by ten random rotations; similarly, the average retrieval precision, the classification accuracy and the standard deviation were computed over these ten random rotations.

Fig. 5 Some examples of Image Set 2
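For reference, subset constructions of this kind can be reproduced along the following lines (our own sketch: 2 × 2 average downsampling, histogram equalisation of an 8-bit grey image, and a random rotation followed by a centre crop, with scipy.ndimage.rotate standing in for whatever tool the authors used).

```python
import numpy as np
from scipy import ndimage

def downsample_2x(img):
    """Average each 2 x 2 block of four adjacent pixels (Subset 1 style)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def hist_equalise(img):
    """Histogram equalisation of an 8-bit grey image (Subsets 2, 4, 6, 8 style)."""
    hist = np.bincount(img.astype(np.uint8).ravel(), minlength=256)
    cdf = np.cumsum(hist) / img.size
    return (255 * cdf[img.astype(np.uint8)]).astype(np.uint8)

def rotate_centre_crop(img, size=64, rng=None):
    """Random rotation followed by a centre crop (Subsets 3, 7 style)."""
    rng = np.random.default_rng() if rng is None else rng
    rot = ndimage.rotate(img.astype(float), rng.uniform(0.0, 360.0), reshape=False)
    cy, cx = rot.shape[0] // 2, rot.shape[1] // 2
    return rot[cy - size // 2:cy + size // 2, cx - size // 2:cx + size // 2]
```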

This paper adopts the two texture features ALBPH and GALBP. To validate the proposed method, the experimental results were compared with those of the following image retrieval methods based on texture features: Ojala et al. [19], Hafiane and Zavidovique [6], Bianconi et al. [14], Huang and Dai [2] and Jhanwar et al. [3]. In the experiments, R = 1 and P = 8 were used for Ojala et al. [19], d = 3 for Hafiane and Zavidovique [6] and Otsu's threshold for Bianconi et al. [14].

Fig. 4 Some examples of Image Set 1


5.1 Performance of the image retrieval system

In the first experiment, on Image Set 1, the average precision of each retrieved image was calculated for Subsets 1-4. The average precision for each value of L is shown in Fig. 6. The experiment used the number of the first 25 retrieved images, L, to compute the precision P for each query image, and then took the average. The experimental results clearly reveal that, for the first 25 returned images, the AGLBP retrieval system is significantly superior to the methods of Ojala et al. [19], Hafiane and Zavidovique [6], Bianconi et al. [14], Huang and Dai [2] and Jhanwar et al. [3]. The average precision for the histogram-equalised images is better than that for the original images. The average precision for the randomly rotated Subsets 3 and 4 decreased faster than for Subsets 1 and 2. At the same time, the average precision of the present method is better than that of the other methods. This paper compares the average precision of Subsets 1-4 for L = 2 and L = 25.

Fig. 6 Comparison of retrieval precision of Image Set 1
a Retrieval precision of Subset 1
b Retrieval precision of Subset 2
c Retrieval precision of Subset 3
d Retrieval precision of Subset 4

Table 1 Comparison of retrieval precision (P, %) on Image Set 1 for L = 2 and L = 25

                                        L = 2                                  L = 25
Subset                       1      2      3          4           1      2      3          4
Ojala et al. [19]            88.11  84.11  70.1±0.5   68.5±0.3    53.56  48.27  22.5±0.2   21.5±0.3
Hafiane and Zavidovique [6]  71.84  71.84  65.8±0.4   65.6±0.5    28.28  28.28  19.4±0.2   19.3±0.2
Bianconi et al. [14]         85.64  86.47  73.9±0.4   68.3±0.5    48.65  48.68  30.4±0.3   24.0±0.1
Huang and Dai [2]            85.82  83.73  68.2±0.5   64±0.6      54.59  50.67  18.1±0.1   14±0.2
Jhanwar et al. [3]           73.24  70.11  56.1±0.3   55.9±0.3    36.26  31.57  9.9±0.1    9.8±0.2
present method               94.38  91.06  80.8±0.5   77±0.4      62.34  56.53  33.4±0.3   29.3±0.2


The experimental results from our method and the other five methods are shown in Table 1. Our method achieved a better average precision for the various images than the others.

Next, the performance was evaluated on the four subsets of Image Set 2. The average precisions for varying values of L are shown in Fig. 7. The experiment was carried out with L ranging from the 1st to the 9th retrieved image. These results clearly reveal that the AGLBP retrieval system is significantly superior to the methods of Ojala et al. [19], Hafiane and Zavidovique [6], Bianconi et al. [14], Huang and Dai [2] and Jhanwar et al. [3]. The average precision for the histogram-equalised images is less than that for the original images. The average precision for Subsets 7 and 8 decreased faster than for Subsets 5 and 6, because Subsets 7 and 8 are rotated variations. At the same time, the average precision of the present method is higher than that of the other five methods.

Table 2 shows the average precision of the AGLBP retrieval system and the precisions of the compared methods on the CUReT images for varying L.

Fig. 7 Comparison of retrieval precision of Image Set 2
a Retrieval precision of Subset 5
b Retrieval precision of Subset 6
c Retrieval precision of Subset 7
d Retrieval precision of Subset 8

Table 2 Comparison of retrieval precision (P, %) on Image Set 2 for L = 2 and L = 9

                                        L = 2                                  L = 9
Subset                       5     6     7          8            5     6     7          8
Ojala et al. [19]            91.4  92.1  82±1.5     79.9±1       73.3  72.0  48.5±1.3   44.5±1
Hafiane and Zavidovique [6]  79.3  79.0  69.1±1.3   69.2±1.2     50.7  50.6  34.5±0.7   34.9±0.6
Bianconi et al. [14]         84.1  76.0  76.8±1.3   72.3±0.9     49.2  44.4  43±0.6     38.5±0.7
Huang and Dai [2]            90.2  76.4  76.4±0.9   63.9±1       68.7  48.0  37.6±0.8   24.9±0.6
Jhanwar et al. [3]           73.5  69.1  58.1±0.6   58.4±1       43.6  42.6  21.1±0.3   21±0.6
present method               97.3  96.0  89.8±0.9   86.2±1.4     78.5  78.6  58.3±1.1   52.1±0.9


The experimental results show that: (i) the AGLBP retrieval system provides a much higher rate of accuracy than the other five methods, and (ii) the average precision of the AGLBP retrieval system decreases as L increases.

5.2 Performance of texture image classification

The performance of texture image classification was evaluated on 24 homogeneous texture images selected from Image Set 1 (shown in Fig. 8), from which Subsets 1 to 4 were generated using the same variations defined above. The SVM classifier was trained using 13 samples from each class, while the other 12 samples were used as testing data in this experiment.

Fig. 8 Twenty-four homogeneous texture images from Image Set 1

With the exception of Hafiane's and Bianconi's methods, the classification accuracy did not decrease from the original texture images (Subset 1) to the histogram-equalised images (Subset 2); after random rotation (Subset 3) and histogram equalisation of the rotated images (Subset 4), however, the changes in classification accuracy were inconsistent across methods. The classification accuracy obtained by the proposed method was higher than the others, as shown in Table 3. The classification accuracy for the original texture images (Subset 1) and their histogram equalisation (Subset 2) was as high as 99.65 and 100%, respectively. In particular, whereas the classification rates of the other methods dropped after random image rotation, the classification accuracy of the method proposed here remained as high as 94.4% (Subset 3) and 95.7% (Subset 4). The standard deviation of the classification rate was also smaller, indicating that the method proposed here is more stable.

The performance of texture classification for Image Set 2 was evaluated on the 45 homogeneous texture images of Subsets 5-8. The SVM classifier was trained using five samples of each class, while the other four samples were used as testing data in this experiment.

The classification accuracy obtained in these experiments is shown in Table 4. From the results, the classification accuracy of our method was higher for the original texture images (Subset 5) and after histogram equalisation (Subset 6), whereas the accuracies of the compared methods dropped (Bianconi's method most markedly). After random rotation (Subset 7) and histogram equalisation of the rotated images (Subset 8), the classification accuracy decreased for every method. The method proposed here is clearly more accurate than the compared methods on every subset, with classification accuracy reaching as high as 86.8% (Subset 7) and 85.7% (Subset 8).

6 Conclusions

This paper has proposed a novel ALBP technique, which considers smooth and unsmooth texture image regions and uses ALBP to generate two new features: ALBPH and GALBP. The two features are highly complementary and can be integrated to establish a grey-difference and grey-gradient descriptor. These features effectively describe the various properties of an image.

This study adopted image retrieval and classification for its experiments. The image database used in the experiments contained two image sets, with four types of subset selected from each image set, and the results were compared with five methods from the literature. The experimental results revealed that the average precision of our method is higher than those of the other methods. For image classification, this study used a support vector machine as the classifier.

Table 3 Comparison of classification accuracy for Image Set 1

                                   Classification accuracy, %
Image subset                 1      2      3         4
Ojala et al. [19]            83.68  85.42  77.9±2.8  72.4±2.4
Hafiane and Zavidovique [6]  93.40  93.06  72±2.3    73.8±2.1
Bianconi et al. [14]         97.92  96.53  86±1.6    80.8±2.7
present method               99.65  100    94.4±1.5  95.7±1.5

Table 4 Comparison of classification accuracy for Image Set 2

                                   Classification accuracy, %
Image subset                 5      6      7          8
Ojala et al. [19]            92.78  91.67  79.2±2.9   73.8±2.5
Hafiane and Zavidovique [6]  88.89  88.82  62.4±2.8   61.6±2.5
Bianconi et al. [14]         67.22  62.78  62.32±1.9  61.4±3.6
present method               97.78  98.33  86.8±2.1   85.7±3.2


The classification results were compared with those gained using the other scholars' methods; regardless of which subset was used, the results were better with our system than with the other methods. Moreover, the stability of both retrieval and classification was also higher.

7 References

1 Tuceryan, M., Jain, A.K.: 'Texture analysis', in Chen, C.H., Pau, L.F., Wang, P.S.P. (Eds.): 'The handbook of pattern recognition and computer vision' (World Scientific, Singapore, 1998, 2nd edn.), pp. 207-248
2 Huang, P.W., Dai, S.K.: 'Image retrieval by texture similarity', Pattern Recognit., 2003, 36, (3), pp. 665-679
3 Jhanwar, N., Chaudhuri, S., Seetharaman, G., Zavidovique, B.: 'Content based image retrieval using motif co-occurrence matrix', Image Vis. Comput., 2004, 22, (14), pp. 1211-1220
4 Liapis, S., Tziritas, G.: 'Color and texture image retrieval using chromaticity histograms and wavelet frames', IEEE Trans. Multimed., 2004, 6, (5), pp. 676-686
5 Moghaddam, H.A., Khajoie, T.T., Rouhi, A.H., Tarzjan, M.S.: 'Wavelet correlogram: a new approach for image indexing and retrieval', Pattern Recognit., 2005, 38, (12), pp. 2506-2518
6 Hafiane, A., Zavidovique, B.: 'Local relational string and mutual matching for image retrieval', Inf. Process. Manag., 2008, 44, (3), pp. 1201-1213
7 Liu, G.H., Yang, J.Y.: 'Image retrieval based on the texton co-occurrence matrix', Pattern Recognit., 2008, 41, (12), pp. 3521-3527
8 Lin, C.H., Chen, R.T., Chan, Y.K.: 'A smart content-based image retrieval system based on color and texture feature', Image Vis. Comput., 2009, 27, (6), pp. 658-665
9 Haralick, R.M., Shanmugam, K., Dinstein, I.: 'Textural features for image classification', IEEE Trans. Syst. Man Cybern., 1973, 3, (6), pp. 610-621
10 Randen, T., Husøy, J.H.: 'Filtering for texture classification: a comparative study', IEEE Trans. Pattern Anal. Mach. Intell., 1999, 21, (4), pp. 291-310
11 Deng, H., Clausi, D.A.: 'Gaussian MRF rotation-invariant features for image classification', IEEE Trans. Pattern Anal. Mach. Intell., 2004, 26, (7), pp. 951-955
12 Varma, M., Zisserman, A.: 'A statistical approach to texture classification from single images', Int. J. Comput. Vis., 2005, 62, (1-2), pp. 61-81
13 Varma, M., Garg, R.: 'Locally invariant fractal features for statistical texture classification'. Proc. IEEE 11th Int. Conf. on Computer Vision, October 2007, pp. 1-8
14 Bianconi, F., Fernández, A., González, E., Caride, D., Calviño, A.: 'Rotation-invariant colour texture classification through multilayer CCR', Pattern Recognit. Lett., 2009, 30, (8), pp. 765-773
15 Varma, M., Zisserman, A.: 'A statistical approach to material classification using image patch exemplars', IEEE Trans. Pattern Anal. Mach. Intell., 2009, 31, (11), pp. 2032-2047
16 Guo, Z., Zhang, L., Zhang, D.: 'A completed modeling of local binary pattern operator for texture classification', IEEE Trans. Image Process., 2010, 19, (6), pp. 1657-1663
17 Ojala, T., Pietikäinen, M., Harwood, D.: 'A comparative study of texture measures with classification based on feature distributions', Pattern Recognit., 1996, 29, (1), pp. 51-59
18 Pietikäinen, M., Ojala, T., Xu, Z.: 'Rotation-invariant texture classification using feature distributions', Pattern Recognit., 2000, 33, pp. 43-52
19 Ojala, T., Pietikäinen, M., Mäenpää, T.: 'Multiresolution gray-scale and rotation invariant texture classification with local binary patterns', IEEE Trans. Pattern Anal. Mach. Intell., 2002, 24, (7), pp. 971-987
20 Zhou, H., Wang, R., Wang, C.: 'A novel extended local-binary-pattern operator for texture analysis', Inf. Sci., 2008, 178, (22), pp. 4314-4325
21 Liao, S., Law, M.W.K., Chung, A.C.S.: 'Dominant local binary patterns for texture classification', IEEE Trans. Image Process., 2009, 18, (5), pp. 1107-1118
22 Guo, Z., Zhang, L., Zhang, D.: 'Rotation invariant texture classification using LBP variance (LBPV) with global matching', Pattern Recognit., 2010, 43, (3), pp. 406-419
23 Ahonen, T., Hadid, A., Pietikäinen, M.: 'Face description with local binary patterns: application to face recognition', IEEE Trans. Pattern Anal. Mach. Intell., 2006, 28, (12), pp. 2037-2041
24 Zhao, G., Pietikäinen, M.: 'Dynamic texture recognition using local binary patterns with an application to facial expressions', IEEE Trans. Pattern Anal. Mach. Intell., 2007, 29, (6), pp. 915-928
25 Shan, C., Gong, S., McOwan, P.W.: 'Facial expression recognition based on local binary patterns: a comprehensive study', Image Vis. Comput., 2009, 27, (6), pp. 803-816
26 Nanni, L., Lumini, A.: 'Local binary patterns for a hybrid fingerprint matcher', Pattern Recognit., 2008, 41, (11), pp. 3461-3466
27 Huang, X., Li, S.Z., Wang, Y.: 'Shape localization based on statistical method using extended local binary pattern'. Proc. Third Int. Conf. on Image and Graphics, December 2004, pp. 184-187
28 Gonzalez, R.C., Woods, R.E.: 'Digital image processing' (Prentice-Hall, 2002, 2nd edn.)
29 Mehtre, B.M., Kankanhalli, M., Lee, W.F.: 'Shape measures for content-based image retrieval: a comparison', Inf. Process. Manag., 1997, 33, (3), pp. 319-337
30 Cortes, C., Vapnik, V.: 'Support-vector networks', Mach. Learn., 1995, 20, (3), pp. 273-297
31 Chang, C.C., Lin, C.J.: 'LIBSVM: a library for support vector machines', 2001. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm
