

Chapter 4 Experiments

4.1 Part I: Results of Each Step

In the previous chapters, the three main steps of the proposed system were introduced. In this part, the experimental results of each step are presented, and the character recognition results are separated into feature extraction and classification.

4.1.1 Result of Potential Object Localization

For this system, there are three steps in potential object localization: detection of moving objects in a specific color, morphology operations, and connected-components labeling (CCL), which together identify the potential regions. First, detection of moving objects in a specific color extracts target pixels from two input color images with a time difference between them, each of size 320×240 pixels. The two input images are shown in Fig. 4.1(a) and Fig. 4.1(b), and the result image is shown in Fig. 4.2. After this extraction there is still a lot of noise in the scene; to reduce it, morphology operations are applied to eliminate small areas and to clean up the connected regions, as shown in Fig. 4.3(a) and (b). To find the potential regions, CCL is applied, as shown in Fig. 4.4; more examples of moving-object extraction are shown in Fig. 4.5.


Fig. 4.1 (a) Input image (T = t−1). (b) Input image (T = t).

Fig. 4.2 Detection of moving objects in a specific color.

Fig. 4.3 (a) Erosion. (b) Dilation.

Fig. 4.4 Result of CCL.


Fig. 4.5 More examples of moving-object extraction. (a) Detection of moving objects in a specific color. (b) Result after morphology operations. (c) Result after CCL.
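The three localization steps can be sketched as follows. This is a minimal illustration assuming simple frame differencing on the color channels and scipy-style morphology; the thresholds, iteration counts, and the `extract_moving_color_regions` helper are illustrative assumptions, not the thesis's exact implementation.

```python
import numpy as np
from scipy import ndimage

def extract_moving_color_regions(frame_prev, frame_curr,
                                 diff_thresh=30, min_area=50):
    """Sketch of the three localization steps on two 320x240 frames
    taken at T = t-1 and T = t."""
    # Step 1: detect moving pixels in the target color by per-pixel
    # differencing of the two frames (Fig. 4.2).
    diff = np.abs(frame_curr.astype(int) - frame_prev.astype(int))
    moving = diff.max(axis=-1) > diff_thresh          # binary mask

    # Step 2: morphology -- erosion removes small noise, dilation
    # restores and reconnects the surviving regions (Fig. 4.3).
    cleaned = ndimage.binary_erosion(moving)
    cleaned = ndimage.binary_dilation(cleaned, iterations=2)

    # Step 3: CCL -- label connected regions and keep those large
    # enough to be potential word-card regions (Fig. 4.4).
    labels, n = ndimage.label(cleaned)
    slices = ndimage.find_objects(labels)
    areas = ndimage.sum(cleaned, labels, range(1, n + 1))
    return [sl for sl, area in zip(slices, areas) if area >= min_area]
```

The returned slices are the bounding boxes of the potential regions, which the next step cuts out of the original frame.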

4.1.2 Result of Character Extraction

After the potential regions are found, the next step is to extract those regions from the original frame, as shown in Fig. 4.6(a). In character extraction, the results show how the character is found within a potential region. The first step of character extraction uses color as the feature to find the plate, as shown in Fig. 4.6(b). After color extraction, the noise-reduced result is shown in Fig. 4.6(c) and the filled image in Fig. 4.6(d). We subtract the plate from the filled plate image and use CCL to classify the largest object as the character. The result is shown in Fig. 4.6(e), and more examples of character extraction are shown in Fig. 4.7.


Fig. 4.6 (a) Potential region. (b) Color extraction. (c) Noise reduction. (d) Filled image. (e) Character extraction.

Fig. 4.7 More examples of character extraction. (a) Potential region. (b) Color extraction. (c) Noise reduction. (d) Filled image. (e) Character extraction.
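The fill-and-subtract idea can be sketched as follows, assuming the color-extracted plate is already available as a binary mask; the `extract_character` helper and its details are an illustrative sketch, not the thesis's exact code.

```python
import numpy as np
from scipy import ndimage

def extract_character(plate_mask):
    """Fill the holes of the color-extracted plate mask, subtract the
    original mask, and keep the largest connected component as the
    character.  `plate_mask` is True where the plate color was found."""
    filled = ndimage.binary_fill_holes(plate_mask)   # Fig. 4.6(d)
    holes = filled & ~plate_mask                     # filled minus plate
    labels, n = ndimage.label(holes)                 # CCL on the holes
    if n == 0:
        return np.zeros_like(plate_mask)
    # the largest component among the holes is the character (Fig. 4.6(e))
    sizes = ndimage.sum(holes, labels, range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)
```

Because the character is a hole in the plate-colored region, filling and subtracting isolates it even when smaller noise holes remain; CCL then picks the largest one.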

4.1.3 Result of Feature Extraction

In this section, three cases are discussed for feature extraction, using the character “4” as an example. In Case 1, the features are computed at 9 different angles, from −40° to 40°, with the same image size of 315×300; the results are shown in Table 4.1. In Case 2, the features are computed at different scales, from 540×488 down to 63×61, with the same aspect ratio; the results are shown in Table 4.2. In Case 3, the features are computed with different aspect ratios, obtained by shortening one axis; the results are shown in Table 4.3. As Table 4.1 shows, the parameters I1~I12 are rotation-invariant. Table 4.2 shows that, when the pattern is large enough, the parameter values are very similar. Therefore, the parameters achieve invariance under translation, rotation, and scaling. As shown in Table 4.3, however, the values of some parameters change greatly with the aspect ratio.

As the proportions change, the values of I2 and I3, and of I8~I12, may change as well. Therefore, even under different rotation angles, sizes, or aspect ratios, the proposed system can still reliably recognize which character it is.

Table 4.1 Same image size with different angles.

Table 4.2 Same aspect ratio with different scales.


Table 4.3 Different aspect ratios with the original size 315×300.
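The exact definitions of I1~I12 are given in the earlier chapters, but the circle-based arc counts mentioned in the classification experiments (the number of arcs on the k-th circle around the character) can be illustrated with a short sketch. The `arc_count` helper, the sampling density, and the centroid-based centering are assumptions for illustration only; the point is that rotating the character only shifts the samples around the circle, leaving the number of arcs unchanged.

```python
import numpy as np

def arc_count(img, radius, n_samples=360):
    """Count the foreground arcs met along a circle of the given
    radius around the character centroid (a hypothetical sketch of
    one rotation-invariant circle feature)."""
    ys, xs = np.nonzero(img)
    cy, cx = ys.mean(), xs.mean()                 # character centroid
    t = np.linspace(0, 2 * np.pi, n_samples, endpoint=False)
    ry = np.clip((cy + radius * np.sin(t)).round().astype(int),
                 0, img.shape[0] - 1)
    rx = np.clip((cx + radius * np.cos(t)).round().astype(int),
                 0, img.shape[1] - 1)
    samples = img[ry, rx].astype(int)
    # number of 0 -> 1 transitions around the (cyclic) circle = arcs
    return int((np.diff(np.r_[samples, samples[0]]) == 1).sum())
```

A vertical stroke crosses a circle of sufficient radius twice, and it still crosses it twice after any rotation, which is why counts of this kind survive rotation.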

4.1.4 Result of Classification

The experimental results are shown in Fig. 4.8: the binary pictures are the characters after extraction, and the classification results are on the left side. As the pictures show, even with different rotation angles, sizes, noise, or aspect ratios, the proposed system can still recognize which character it is.

Fig. 4.8 Examples of classification.


To compare classification performance, two cases are discussed. In Case 1, the accuracy rates with 3 kinds of training data are considered. In Case 2, the accuracy rates with 2 kinds of input neurons are considered.

To train the character-recognition neural network (CRNN for short), this thesis uses 137,700 training samples, shown in Fig. 4.9, with different sizes, rotations, translations, tilts, and aspect ratios, together with 6,000 testing samples. The accuracy rates are shown in Table 4.4. CRNN2 and CRNN3 use the same structure with less training data. Clearly, more training data yields a higher accuracy rate.

In Case 2, two further CRNNs are discussed: CRNN4 uses the features without neglecting the number of arcs of the 8th circle outside the character, and CRNN5 uses the features while neglecting the numbers of arcs of the 7th and 8th circles outside the character. Although CRNN4 makes fewer errors, there is almost no difference in accuracy rate between CRNN and CRNN4. When neglecting another feature is considered instead, neglecting I12 gives the best accuracy rate, 99.0%.
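The character-recognition network itself is a feed-forward classifier trained on the extracted feature vectors. A minimal numpy sketch follows; the hidden-layer size, learning rate, and two-class toy setup are illustrative assumptions and far smaller than the thesis's CRNN, which maps the I1~I12-style features to one output neuron per character class.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_mlp(X, y, n_hidden=16, n_classes=2, lr=0.5, epochs=300):
    """Tiny one-hidden-layer network trained with full-batch gradient
    descent on softmax cross-entropy (sizes are illustrative)."""
    n, d = X.shape
    W1 = rng.normal(0, 0.5, (d, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.5, (n_hidden, n_classes)); b2 = np.zeros(n_classes)
    Y = np.eye(n_classes)[y]                      # one-hot targets
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)                  # hidden layer
        Z = H @ W2 + b2
        P = np.exp(Z - Z.max(1, keepdims=True))
        P /= P.sum(1, keepdims=True)              # softmax outputs
        G = (P - Y) / n                           # cross-entropy gradient
        GH = (G @ W2.T) * (1 - H ** 2)            # backprop through tanh
        W2 -= lr * (H.T @ G);  b2 -= lr * G.sum(0)
        W1 -= lr * (X.T @ GH); b1 -= lr * GH.sum(0)
    return W1, b1, W2, b2

def predict(params, X):
    W1, b1, W2, b2 = params
    return (np.tanh(X @ W1 + b1) @ W2 + b2).argmax(1)
```

Scaling this idea to many character classes only changes `n_classes` and the amount of training data, which is exactly the axis Table 4.4 varies.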

Fig. 4.9 Examples of training data for character recognition.


Table 4.4 Different training data with CRNN

        training data   testing data   errors   accuracy rate
CRNN    137700          6000           232      99.6%
CRNN2   85262           6000           687      88.5%
CRNN3   23000           6000           1344     77.6%

Table 4.5 Different conditions with CRNN


Three intelligent neural networks are proposed, respectively, to detect moving word cards in a specific color, to extract color, and to recognize characters. In this research, we find that supervised learning neural networks can replace algorithms such as morphology operations and can detect moving objects in two or more specific colors.

Fig. 4.10 shows the system flowchart. The blue parts are where intelligent neural networks are used, and the yellow parts can be implemented either way. The experimental results of replacing morphology operations with intelligent neural networks will be presented; considering the execution time, we do not use intelligent neural networks for all of the stages.
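As a sketch of how a supervised network can stand in for a morphology operation, the example below trains a single logistic neuron on 3×3 pixel neighborhoods to reproduce a majority filter, a morphology-like denoising rule. The patch extraction, the majority-filter target, the `train_pixel_net` helper, and all hyperparameters are assumptions for illustration only, not the networks used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

def patches(img):
    """All 3x3 neighborhoods of the interior pixels, one row each."""
    h, w = img.shape
    return np.stack([img[i:h - 2 + i, j:w - 2 + j].ravel()
                     for i in range(3) for j in range(3)],
                    axis=1).astype(float)

def train_pixel_net(X, y, lr=1.0, epochs=5000):
    """Single logistic neuron fitted by full-batch gradient descent
    to imitate a fixed local rule on the 9 neighborhood pixels."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))    # sigmoid output
        g = (p - y) / len(y)                      # cross-entropy gradient
        w -= lr * (X.T @ g)
        b -= lr * g.sum()
    return w, b
```

Because the majority rule is a threshold on the neighborhood sum, it is linearly separable and a single neuron suffices; richer morphology-like mappings would need the hidden layers the thesis's networks provide.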
