

4. Texture Discrimination Based on GA-CNN Proliferation Structure

4.5 Design of Characteristic Templates Optimized by Genetic Algorithms

4.5.1 The Design Rules on GA by TBS

A genetic algorithm (GA) is a methodology that accomplishes a specific task or process through a series of rigorous mathematical operations and logical decisions. Through selection, crossover, mutation, and reproduction of genes, superior offspring survive while inferior ones are excluded through competition. When a GA is used as the training tool for CNN templates, all the elements in the CNN templates have to be verified. As far as texture analysis is concerned, CNN templates that can simulate or differentiate texture patterns act as a special kind of filter for extracting features from the original textures. Therefore, we modulate this specific form of CNN template instead of searching for a new CNN template between every pair of texture patterns. The reason is simple: the former reduces the number of CNN templates that must be trained while still reflecting the non-overlapping feature mappings produced by those templates. In fact, certain types of templates generate distinct feature mappings among different textures and thereby enhance the differentiation power of the classifier. Thus, the goal here is to predefine this special form of template for differentiating texture patterns. Such CNN templates can be determined and optimized through the adjustment of the TBS difference, and a template is accepted when the following criterion is satisfied: at every index k, the difference ratio between the TBS values computed from the CNN outputs for any two different textures, texture i and texture j, must exceed the threshold ηk. The subscript k denotes the kth location in the calculated TBS. The threshold ηk of the difference ratio between texture patterns, whose value lies between 0 and 1, can be selected beforehand for various sets of texture patterns; ηk approaching 1 implies a higher difference ratio between texture i and texture j. We set

the same ηk for all indices for convenience, and a larger ηk when higher discriminative power between textures is required. For this requirement, we choose the error function in GA as given in (20).

In (20), l denotes the training template set, including A, B, and I; n is the number of cells, which depends on the size of the templates; y_ki is the desired output of the target texture pattern and y_kj is the steady-state output of the discriminative texture pattern at the kth cell. The definition in (20) expresses only the relationship between the currently optimized templates, whereas the error function should be related to our TBS difference.
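As an illustration of how the threshold criterion and the per-cell output difference of (20) could be evaluated in practice, a minimal sketch is given below; the array-based TBS representation, the max-normalized difference ratio, the mean-absolute form of the per-cell difference, and all function names are our own assumptions rather than the exact formulation of this thesis.

import numpy as np

def tbs_difference_ratio(tbs_i, tbs_j, eps=1e-12):
    # Element-wise difference ratio between the TBS curves of texture i and
    # texture j; values lie in [0, 1], with values near 1 indicating a large
    # difference at that index (max-normalization is an assumption).
    tbs_i, tbs_j = np.asarray(tbs_i, float), np.asarray(tbs_j, float)
    return np.abs(tbs_i - tbs_j) / (np.maximum(np.abs(tbs_i), np.abs(tbs_j)) + eps)

def criterion_satisfied(tbs_i, tbs_j, eta):
    # A candidate template is accepted when the difference ratio exceeds the
    # preselected threshold eta at every index k (a single eta is used for
    # all indices, as in the text).
    return bool(np.all(tbs_difference_ratio(tbs_i, tbs_j) >= eta))

def per_cell_output_difference(y_target, y_discriminative):
    # Average difference over the n cells between the desired output y_ki of
    # the target texture and the steady-state output y_kj of the
    # discriminative texture (mean absolute difference is an assumption).
    y_target = np.asarray(y_target, float)
    y_discriminative = np.asarray(y_discriminative, float)
    return float(np.sum(np.abs(y_target - y_discriminative)) / y_target.size)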

Therefore, the error function in GA was defined as the reciprocal of the TBS difference: finding a lower difference between the desired output y_id and the steady-state output y_s is equivalent to finding a higher TBS difference. As intended in this chapter, the fitness function given above is set to the difference of the defined feature curve, TBS, between different texture patterns. Moreover, any template optimized by GA must be stable, because low fitness values always result from unstable trajectories in the CNN; reaching a high fitness value therefore ensures CNN stability, which is exactly what the GA is used for.
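To make the relationship between the error and the fitness concrete, a minimal sketch under our own assumptions is given here; the sum-of-absolute-differences aggregation over the indices k and the small constant guarding against division by zero are not specified in the text.

import numpy as np

def fitness_from_tbs(tbs_i, tbs_j):
    # Fitness of a candidate template: the TBS difference between the two
    # textures, aggregated over all indices k (assumed aggregation).
    return float(np.sum(np.abs(np.asarray(tbs_i, float) - np.asarray(tbs_j, float))))

def error_from_fitness(fitness, eps=1e-12):
    # Error used by the GA, defined as the reciprocal of the TBS difference;
    # eps guards against degenerate candidates with zero TBS difference.
    return 1.0 / (fitness + eps)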

Since these two functions pursue the same goal, we do not need to define an additional error function or map it onto the fitness function through a transformation, as most approaches do. The other GA-related adjustments in the search for CNN templates, such as the encoding process, the reproduction and selection scheme, and the crossover and mutation rates, are described in detail in the previous chapter.
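Purely for orientation, a minimal real-coded GA loop of the kind referred to above might look like the following sketch; the population size, crossover and mutation rates, gene range, and the helper evaluate_tbs_difference (assumed to simulate the CNN with the candidate template and return the TBS difference used as fitness) are illustrative assumptions, not the settings of the previous chapter.

import numpy as np

rng = np.random.default_rng(0)

def run_ga(evaluate_tbs_difference, genome_len=19, pop_size=40,
           generations=200, crossover_rate=0.8, mutation_rate=0.05):
    # Chromosome: the template entries (3x3 A, 3x3 B, and bias I -> 19 genes).
    pop = rng.uniform(-4.0, 4.0, size=(pop_size, genome_len))
    for _ in range(generations):
        fitness = np.array([evaluate_tbs_difference(ind) for ind in pop])
        # Tournament selection: keep the fitter of two random individuals.
        idx = rng.integers(pop_size, size=(pop_size, 2))
        parents = pop[np.where(fitness[idx[:, 0]] > fitness[idx[:, 1]],
                               idx[:, 0], idx[:, 1])]
        # Single-point crossover on consecutive parent pairs.
        children = parents.copy()
        for a in range(0, pop_size - 1, 2):
            if rng.random() < crossover_rate:
                cut = rng.integers(1, genome_len)
                children[a, cut:], children[a + 1, cut:] = (
                    parents[a + 1, cut:].copy(), parents[a, cut:].copy())
        # Gaussian mutation on a small fraction of genes.
        mask = rng.random(children.shape) < mutation_rate
        children[mask] += rng.normal(0.0, 0.5, size=mask.sum())
        # Elitism: carry the best individual of the old population forward.
        children[0] = pop[np.argmax(fitness)]
        pop = children
    fitness = np.array([evaluate_tbs_difference(ind) for ind in pop])
    return pop[np.argmax(fitness)]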

In order to optimize CNN templates that generate different feature maps for various texture sets more efficiently, we predefine three types of uncoupled CNN templates that take the distribution of different texture patterns into account with regard to regularity, uniformity, and symmetry; we denote them Type 1, Type 2, and Type 3.
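In an uncoupled CNN template the feedback template A acts only through its centre element, so that a cell's dynamics depend on the input neighbourhood through B and the bias I; a minimal sketch of such a template set, with placeholder values of our own rather than the optimized Type 1, 2, and 3 entries, is given below.

import numpy as np

# Sketch of an uncoupled CNN template set: the feedback template A has only
# its centre element nonzero, while B weights the 3x3 input neighbourhood.
# The numerical values are placeholders, not the thesis's optimized entries.
A = np.array([[0.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 0.0]])     # feedback template: centre element only
B = np.array([[-1.0, -1.0, -1.0],
              [-1.0,  8.0, -1.0],
              [-1.0, -1.0, -1.0]])  # control template acting on the input image
I = -0.5                            # bias term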

All three types of CNN templates are defined according to the arrangement of each cell's coupling with its texture pattern, and only three conditions are considered because of their applicability to various texture patterns. In each GA training process, only one of these three templates is selected to form the discriminative feature map for a given pair of texture sets. We will show the faster convergence obtained with our GA-defined templates in the upcoming experimental results. The three types of CNN templates describe different ranges of the feature space, and our feature curves reduce the number of templates that have to be optimized. It is interesting to observe that the CNN edge template in the library happens to be one of the defined templates, since it describes the texture pattern in the high-frequency band. It therefore becomes the first template used to represent textures before deciding whether the proliferation of CNN templates is necessary. Meanwhile, TBS provides an evaluation index in the GA optimization process to determine the extent of discrimination between textures.
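A hedged sketch of this proliferation decision is given below; cnn_feature_map, compute_tbs, criterion_satisfied, and optimize_template_by_ga are hypothetical helpers standing in for the CNN simulation, the TBS extraction, the threshold test, and the GA search described above.

def discriminative_templates(texture_i, texture_j, eta, edge_template,
                             cnn_feature_map, compute_tbs,
                             criterion_satisfied, optimize_template_by_ga):
    # Start from the library edge template; a new GA-optimized template is
    # proliferated only when the TBS difference criterion is not met.
    templates = [edge_template]
    tbs_i = compute_tbs(cnn_feature_map(texture_i, edge_template))
    tbs_j = compute_tbs(cnn_feature_map(texture_j, edge_template))
    if criterion_satisfied(tbs_i, tbs_j, eta):
        return templates  # the edge template already discriminates the pair
    # Otherwise search for a new template whose feature maps separate the
    # two textures; the GA fitness is the TBS difference, as in the text.
    templates.append(optimize_template_by_ga(texture_i, texture_j))
    return templates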

To acquire the feature maps based on different CNNs, we have to optimize our defined templates. The optimized results after the GA search are shown in Fig. 4.5.1_1. For any two texture patterns (Fig. 4.5.1_2 (a)) that cannot be classified correctly by the first template, a new CNN template is optimized as stated in our approach. The evolution process is demonstrated by the corresponding feature maps (Fig. 4.5.1_2 (b)) through GA-CNN and the corresponding fitness functions (Fig. 4.5.1_2 (c)) during GA optimization. The first and second rows in Fig. 4.5.1_2 (b) and (c) correspond to the left and right texture patterns in Fig. 4.5.1_2 (a), respectively.

From the steady-state feature maps shown in the rightmost figures of Fig. 4.5.1_2 (b), we see that the original texture patterns are well described by the new feature maps, which show distinct distributions in the horizontal and vertical directions for the two patterns. In Fig. 4.5.1_2 (c), the fitness values of the best, average, and poorest populations are indicated for each of two hundred generations. The fitness values clearly settle toward a steady state after an appropriate number of generations. Fig. 4.5.1_3 describes the convergence curve given by the variation of the error function during GA training between our target texture patterns. We define the error function as the difference between the current and desired patterns, as in (20), in order to observe the convergence behaviour during the evolution of the GA-optimized CNN templates. The x-axis in Fig. 4.5.1_3 is labelled in epochs, since the populations generated by the GA correspond to the training process of a neural network. Finally, both the distribution of fitness values and the convergence curve indicate the number of generations needed to reach the satisfactory classification results obtained in our experiments.

Fig. 4.5.1_1 Optimized CNN template set for all sixteen texture patterns


Fig. 4.5.1_2 GA training process for our defined CNN template. (a) The misclassified texture patterns in the first run. (b) The evolved feature maps during the GA training phase. (c) The corresponding fitness functions of the best, average, and poorest populations after 50, 100, and 200 generations, respectively.

Fig. 4.5.1_3 The convergence curve of the GA training