

4. Texture Discrimination based on GA-CNN proliferation structure

4.6 Texture Classification Mechanism

So far, we have put forth a new, more flexible structure for generating CNN templates. This structure also handles the classification of texture patterns under both supervised and unsupervised conditions. Here we address how our texture classification mechanism performs this function.

Our classification mechanism comprises the decision engine and the classification algorithm, in addition to the GA-based search for CNN templates. As indicated earlier, the CNN template set guarantees the existence of feature maps for various texture patterns, while TBS provides an evaluation standard for deciding whether the classified outputs are the desired ones and whether a TBS series can describe the difference between texture patterns. The decision engine therefore uses the TBS difference between textures to decide whether a new CNN template has to be proliferated. If the TBS difference is greater than some threshold, the TBS series is distinct enough to represent the different textures and the classification process terminates. Otherwise, we must generate more CNN templates to form a new TBS series for differentiating the textures. Whether the number of clusters is known a priori is also taken into account, since it determines the processing purpose. If the number of clusters is known, classification becomes much easier by applying the pre-classifier and the decision engine: since the features (TBS series) extracted from the CNN outputs are highly representative of the various texture patterns, a complex or time-consuming pre-classifier is no longer required, and a simple classifier suffices. If the number of clusters is not given, we first apply the classification algorithm to find the number of clusters, which is then used in the later stages; there is no need to apply any existing unsupervised classifier, which might be complicated and inefficient for texture classification. To sum up, the classification algorithm built on the TBS series is given below.

Classification Algorithm

1. Extract the dominant features from the TBS series.

2. Set n_iteration = 1, where n_iteration is the number of iterations, and calculate the centre tuple t_c of all texture patterns, i.e.

   t_ci = (1/n) Σ_{j=1}^{n} t_ij, for 1 ≤ i ≤ m,

where t_ij is the ith dimension of tuple j, m is the total dimension of each tuple, and n is the total number of tuples (texture patterns).

3. Calculate the distance from each tuple to the centre tuple, i.e.

   d_cj = ( Σ_{i=1}^{m} (t_ij − t_ci)^2 )^{1/2}, for 1 ≤ j ≤ n,

where d_cj is defined as the 2-norm distance from tuple j to the centre tuple over every dimension i of tuple j.

4. Find the maximum distance d_max = max_j { d_cj }.

5. For each tuple j, evaluate the ratio d_cj / d_max.

6. Find the tuples satisfying d_cj / d_max ≥ η, where η is the expected difference ratio between texture patterns and 0 ≤ η ≤ 1. Let k be the total number of tuples j that satisfy this constraint.

7. Make the centres of the new clusters be these t_j, where 1 ≤ j ≤ k. The number of clusters is now k + 1 (i.e. n_cluster,r = k + 1, with r = n_iteration), since t_c should also be one of the cluster centres.

8. Assign each of the remaining n − n_cluster,r unclassified members t_j to the cluster whose distance d_ji is the minimum among all i (1 ≤ i ≤ n_cluster,r).

9. Recalculate the centres of the new clusters to form c_j, where 1 ≤ j ≤ n_cluster,r.

10. Evaluate the difference ratio r_ij between clusters, i.e.

    r_ij = d(c_i, c_j) / max_{i,j} { d(c_i, c_j) }, for 1 ≤ i < j ≤ n_cluster,r.

11. If min_{i,j} { r_ij } ≥ η, stop the algorithm and set n_cluster = n_cluster,r, where r = n_iteration. Else set n_iteration = n_iteration + 1, merge the two closest clusters so that n_cluster,r = n_cluster,r−1 − 1, where r = n_iteration, and go back to step 8.
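The clustering procedure above can be sketched in NumPy. This is a minimal sketch under stated assumptions: the function name classify_tbs, the merge-by-averaging rule, and the dropping of empty clusters are our own illustrative choices, not details given in the original algorithm; it also assumes the feature tuples are not all identical.

```python
import numpy as np

def classify_tbs(tuples, eta, max_iter=50):
    """Sketch of the TBS clustering: `tuples` is an (n, m) array of dominant
    features (one tuple per texture), `eta` the expected difference ratio."""
    t = np.asarray(tuples, dtype=float)
    tc = t.mean(axis=0)                          # centre tuple (step 2)
    d = np.linalg.norm(t - tc, axis=1)           # 2-norm distances (step 3)
    seeds = np.where(d / d.max() >= eta)[0]      # far-from-centre seeds (steps 4-6)
    centers = np.vstack([t[seeds], tc])          # k + 1 initial centres (step 7)
    labels = np.zeros(len(t), dtype=int)
    for _ in range(max_iter):
        # nearest-centre assignment (step 8)
        dist = np.linalg.norm(t[:, None, :] - centers[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        # recompute centres, dropping empty clusters (step 9; dropping is an assumption)
        used, labels = np.unique(labels, return_inverse=True)
        centers = np.array([t[labels == i].mean(axis=0) for i in range(len(used))])
        k = len(centers)
        if k == 1:
            break
        # pairwise difference ratios between clusters (step 10)
        pair = np.linalg.norm(centers[:, None] - centers[None, :], axis=2)
        iu = np.triu_indices(k, 1)
        r = pair[iu] / pair[iu].max()
        if r.min() >= eta:                       # all clusters distinct enough (step 11)
            break
        # otherwise merge the two most similar clusters and iterate
        a, b = iu[0][r.argmin()], iu[1][r.argmin()]
        merged = (centers[a] + centers[b]) / 2.0
        centers = np.vstack([np.delete(centers, [a, b], axis=0), merged])
    return labels, centers
```

With two well-separated groups of feature tuples and η = 0.5, the sketch converges to two clusters, matching the intent of the stopping rule.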

As the first step of this algorithm illustrates, extracting the dominant features from the TBS series simplifies the analysis of the feature curves and enables the convergence of the classification. The dominant features used in this chapter are the average height, the width, the peak value, and the position of the peak value, all calculated from the TBS feature curve. The classification algorithm finds the number of clusters efficiently, according to how clearly we expect the textures to be distinguished. Naturally, the classification mechanism goes back to the proliferating system if the current TBS series is still insufficient to distinguish the texture patterns; this is precisely the case in which the number of clusters has been given, and the processing procedure becomes more straightforward once the number of clusters is known a priori. We examine the outcomes for the different given conditions in the following section to show that both cases yield satisfactory results.
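The four dominant features named above can be sketched as follows; the exact definitions of "width" (here, the number of non-zero positions) and the return layout are our assumptions, since the chapter names the features without formalizing them.

```python
import numpy as np

def dominant_features(tbs):
    # Hedged sketch: the four dominant features of one TBS feature curve
    # (a 1-D array), as named in the text.
    tbs = np.asarray(tbs, dtype=float)
    height = tbs.mean()                 # average height of the curve
    width = np.count_nonzero(tbs)       # width: non-zero positions (one plausible reading)
    peak = tbs.max()                    # peak value
    peak_pos = int(tbs.argmax())        # position of the peak value
    return np.array([height, width, peak, peak_pos])
```

One such tuple per texture then feeds the classification algorithm above.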

4.7 Experimental Results

This section focuses on the experimental results of the classification outcomes. In addition, we show the performance of the feature curves presented previously, to demonstrate how well they represent various texture patterns in practical use. All the texture patterns used in this chapter are obtained from the Brodatz texture database, and sixteen texture patterns are used in the following experiments. For both the experimental and the optimization parts, the standard size of the texture patterns in this chapter is 64x64 pixels. First, the capability of the feature curve TBS in one direction is shown in Fig. 4.7_1 (a)-(c) for four texture patterns cropped with different orientations and regularity. Fig. 4.7_1 (a) shows the original four texture patterns, with each row representing the same texture collected in five different orientations and regularities.

Fig. 4.7_1 (b) and (c) show the feature maps obtained from the operation of the first CNN template and the corresponding feature curves scanned in the horizontal direction. Fig. 4.7_1 (c) reveals the different distributions among these four texture patterns. Taking these four textures as an example, the feature curves obtained from the given CNN templates indeed capture the discriminability between different texture patterns.
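The scanning step that produces these feature curves can be sketched briefly. The reading of TBS as the count of active (black) pixels along each scan line of a binary feature map is an assumption consistent with the curves described here, and the function name is illustrative.

```python
import numpy as np

def tbs_curve(feature_map, direction="horizontal"):
    # Sketch: count the active pixels of a binary feature map along each
    # scan line, in the chosen direction (an assumed reading of TBS).
    fm = np.asarray(feature_map)
    axis = 1 if direction == "horizontal" else 0
    return (fm > 0).sum(axis=axis)
```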

In a more difficult case, the original CNN template is not sufficient to discriminate the eight texture patterns shown in Fig. 4.7_2 (a)-(c). Fig. 4.7_2 (a) shows the original eight texture patterns, while Fig. 4.7_2 (b) and (c) present the same content as in Fig. 4.7_1 for this larger set of texture patterns. As our approach prescribes, a new CNN template must then be optimized so that the current texture patterns (Fig. 4.7_2 (a)) can be differentiated by the combined TBS series extracted from this newly optimized template. With the TBS difference ratio, we can foresee whether the texture patterns will lead to similar or different TBS distributions. Accordingly, we train one more CNN template to obtain the distinct feature maps shown in Fig. 4.7_3 (a), while Fig. 4.7_3 (b) and (c) show the resulting TBS in the horizontal and vertical directions, which differentiates some of the remaining texture patterns.

More specifically, for the 4th row (texture 4) through the 8th row (texture 8) in Fig. 4.7_2 (a), and (textures 4 and 5) and (textures 7 and 8) in particular, the original feature maps (Fig. 4.7_2 (b)) and TBS (Fig. 4.7_2 (c)), which cannot represent or differentiate these textures on their own, become representative enough to discriminate these patterns once the new TBS (Fig. 4.7_3 (b) and (c)) is incorporated to form a complete TBS series. The experimental results (Fig. 4.7_2 and 4.7_3) therefore show that the proliferated CNN templates, together with the original ones, give better discriminability for more texture patterns. To elucidate the function of the TBS series for texture discrimination, we set up a verifying experiment with numerical comparisons based on these figures (Fig. 4.7_2 and 4.7_3) in Table 4.7_1. Each entry in Table 4.7_1 (a) gives the discrimination index of the corresponding pair of textures. The discrimination index, denoted by η(i, j), measures the discriminative ability between texture i and texture j (the row numbers in Fig. 4.7_2 or 4.7_3). We show only the five smallest discrimination indices for each figure, listed from left to right. All the values in Table 4.7_1 (a) are smaller than 0.5, which indicates the difficulty of separating these textures. We therefore demonstrate the function of the TBS series (a combination of more features extracted from more CNN templates) in Table 4.7_1 (b), showing only the texture pairs with the five lowest discrimination indices for comparison. As Table 4.7_1 (b) indicates, the combined TBS series enhances the discriminative ability among textures, since every discrimination index in Table 4.7_1 (b) is higher than its counterpart in Table 4.7_1 (a). These quantitative experiments confirm that features from the proliferated templates make the texture patterns more discriminative.
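The discrimination index is reported but not derived here. One plausible reading, consistent with the normalised difference ratio used elsewhere in the chapter, is the pairwise feature-space distance normalised by the largest pairwise distance, so that 0 ≤ η(i, j) ≤ 1. Both the function name and this exact definition are our assumptions.

```python
import numpy as np

def discrimination_index(features):
    # Hedged sketch of eta(i, j): pairwise distance between the feature
    # tuples of textures i and j, normalised by the largest pairwise
    # distance (an assumed definition).
    f = np.asarray(features, dtype=float)     # one feature tuple per texture
    d = np.linalg.norm(f[:, None] - f[None, :], axis=2)
    return d / d.max()                        # symmetric matrix of eta(i, j)
```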

On the strength of these satisfactory experimental results, we also demonstrate how effective our defined CNN template is. Our structure clearly outperforms the traditional approach that treats the template as nothing more than a generic template with nineteen parameters to be trained. Fig. 4.7_4 shows the convergence behaviour, in terms of the GA fitness values of the best, average, and poorest populations after the same number of generations, for the general CNN template (Fig. 4.7_4 (a)) and for our predefined CNN template (Fig. 4.7_4 (b)). Unlike the generally adopted GA-based CNN templates, which often have difficulty converging to a proper template, our predefined case shows a much faster convergence speed. In addition, our experiments show that at least three optimized templates are sufficient to classify up to sixteen kinds of texture patterns using TBS.

To classify fewer than eight textures in obviously distinct orientations, one of the two proliferated templates is enough to differentiate the texture patterns. As Table 4.7_2 shows, our introduced feature curve, TBS, provides a better classification outcome than the other CNN-based features from previous studies. Tables 4.7_3 and 4.7_4 give the classification results for different rotations and sampling sizes of the textures, respectively, which suggests the stability and invariance of our approach under different situations. Table 4.7_5 compares the experimental results between the different feature curves; there is little difference even when only one orientation of TBS is used. Since GA is an optimization-based training process, the training/testing separation of the texture data sets deserves some discussion. We use the tenfold cross-validation testing model to make our experimental results more persuasive. Table 4.7_6 lists the individual and average classification errors for a fixed data set of eight texture patterns at the standard size (64x64). The average classification error based on our introduced feature series is much lower than that of the other traditional CNN-based features. Finally, for the case in which the number of clusters is not given, the classification results are shown in Table 4.7_7, which indicates that the number of clusters increases as the difference ratio between textures (η) decreases, and vice versa. In this way, we can choose the number of clusters into which the texture patterns in the database are classified by adjusting η. As the classification outcomes show, the error percentage of our introduced system structure remains acceptable even when the number of clusters is not known beforehand. In Table 4.7_7, the error percentage for the fixed η is not given in the unsupervised case, since it is meaningless to calculate a classification error percentage against a number of clusters that differs from the real situation.

All the experiments indicate that our texture classification mechanism with the introduced TBS needs fewer templates for optimization, which ensures time efficiency in the GA optimization process as well as satisfactory texture discrimination.


Fig. 4.7_1 Texture representation for four texture case (a) the original texture patterns (b) feature maps (c) TBS.


Fig. 4.7_2 Texture representation for eight texture case (a) the original texture patterns (b) feature maps (c) TBS.


Fig. 4.7_3 Texture representation for eight texture case based on another CNN template illustrated by (a) feature maps (b) TBS in the horizontal direction (c) TBS in the vertical direction.


Fig. 4.7_4 The convergence condition by GA for (a) The general CNN template (19 optimized parameters) (b) Our predefined CNN template

Table 4.7_1 Numerical comparison of texture discrimination ability based on Fig. 4.7_2 and 4.7_3

a. For individual feature curve (by only one CNN template)

Discrimination Index

Subjects         1             2             3             4             5
Fig. 4.7_2       η(4,5)=0.12   η(4,6)=0.15   η(5,6)=0.17   η(7,8)=0.22   η(3,7)=0.27
Fig. 4.7_3 (b)   η(1,2)=0.13   η(3,4)=0.14   η(2,8)=0.16   η(2,7)=0.21   η(7,8)=0.27
Fig. 4.7_3 (c)   η(2,3)=0.10   η(2,4)=0.14   η(1,3)=0.18   η(4,7)=0.19   η(2,8)=0.22

b. Comparison of discrimination index by one feature curve and combined feature curves (TBS series)

Discrimination Index

Subjects           η(2,3)   η(4,5)   η(1,2)   η(2,4)   η(3,4)
One CNN template   0.10     0.12     0.13     0.14     0.14
TBS series         0.51     0.58     0.60     0.55     0.52

Table 4.7_2 Comparison in the classification outcome based on various features

Table 4.7_3 Experimental results for different rotations of texture patterns

Classification error for different rotations of texture patterns

Number of textures   Original (0°)   90°     180°    270°
4 textures           0.4%            0.7%    0.5%    0.6%
8 textures           3.6%            3.8%    3.5%    3.7%
12 textures          4.8%            5.4%    5.1%    5.5%
16 textures          5.8%            6.0%    6.2%    5.9%

Table 4.7_4 Experimental results for different sizes of texture patterns

Classification error for different sizes of texture patterns

Number of textures   64x64   32x32   16x16   100x100
4 textures           0.4%    1.2%    3.5%    0.3%
8 textures           3.6%    3.8%    4.3%    3.3%
12 textures          4.8%    5.2%    6.5%    4.7%
16 textures          5.8%    6.2%    7.2%    5.8%

Table 4.7_5 Experimental results for different TBS (vertical and horizontal)

Classification error for different TBS

Number of textures   Horizontal only   Vertical only
4 textures           0.5%              0.5%
8 textures           4.2%              4.7%
12 textures          5.2%              5.8%
16 textures          6.5%              6.7%


Table 4.7_6 Experimental results for tenfold cross-validation testing model

Features   Average classification error
TBS        13.5%
Rob        17.8%
TE         25.2%

Table 4.7_7 Experimental results for the case when the number of clusters is not known

Number of textures   Clusters   η      Error (%)   Clusters   η      Error (%)
4 textures           3          0.5    NA          4          0.60   2
8 textures           10         0.5    NA          8          0.55   4
12 textures          8          0.5    NA          12         0.45   5
16 textures          20         0.5    NA          16         0.36   8

4.8 Concluding Remarks

In this chapter, we have presented a new methodology in which a GA-CNN proliferating system deals with texture patterns under different given conditions. CNN's play a crucial part in the signal analysis, projecting many kinds of texture patterns through various templates, and GA is used to optimize our defined CNN templates, which map the texture patterns into different feature maps. Most significantly, the feature curve (TBS), shown to be a valid representation of texture patterns, is introduced to classify textures efficiently by determining whether one more CNN template should be proliferated. The TBS series can also determine the number of clusters when it is not given in advance, which makes the classification process simpler and more flexible. Furthermore, TBS could be applied to the binary images of any distinct feature maps, provided some other analytic tool generates such feature maps as CNN's do. Finally, the experimental results of the newly introduced approach in this chapter prove superior to those of other methods, so it is foreseeable that our structure will be of great help in applications of higher-order image processing in the future.