

Chapter 4 Experimental Results and Discussion

4.2 Experimental Results

4.2.2 Flexibility of ANN Depth Detection Algorithm

The restrictions of the traditional depth detection algorithm were confirmed and demonstrated in the previous subsection. To remove these restrictions, this thesis proposes the ANN depth detection algorithm.

The ANN architecture for depth detection is evaluated experimentally. First, the number of neurons in the output layer must be decided. One candidate uses three output neurons, corresponding respectively to the world coordinates (X, Y, Z) of the object point; the other uses a single output neuron for the depth Z alone.

Case IV from the previous subsection is treated as the general case of the problem. The training data of the neural network consists of the same 12 matched points in the left and right images, captured at distances Z from 65 cm to 165 cm in increments of 20 cm.

To check the accuracy of the trained network, we presented it with stereo-pair points that were not all included in the training set but lay within the range of distances of interest. The testing data consists of the same two adjacent points introduced in the previous subsection, located in the left and right images at distances Z from 65 cm to 165 cm in increments of 10 cm. After training finished, each neural network was tested with both the training and testing data sets.
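The depth grids above can be sketched as follows. This is a minimal illustration, not the thesis code: the sample layout (x_l, y_l, x_r, y_r) as the four network inputs and (X, Y, Z) as the targets follows the architecture described in this section, while `make_dataset` and `stereo_points` are hypothetical names introduced here.

```python
import numpy as np

# Depth grids described in the text (all distances in cm).
TRAIN_DEPTHS = np.arange(65, 166, 20)   # 65, 85, 105, 125, 145, 165
TEST_DEPTHS = np.arange(65, 166, 10)    # 65, 75, ..., 165
POINTS_PER_DEPTH = 12                   # matched points per stereo pair

def make_dataset(depths, stereo_points):
    """Pair each stereo observation (x_l, y_l, x_r, y_r) with its
    world-coordinate target (X, Y, Z).

    `stereo_points` is a hypothetical callable returning, for a given
    depth z, the matched image points together with the known (X, Y)
    world coordinates of each point.
    """
    inputs, targets = [], []
    for z in depths:
        for (xl, yl, xr, yr), (X, Y) in stereo_points(z):
            inputs.append([xl, yl, xr, yr])  # 4 input neurons
            targets.append([X, Y, z])        # 3 output neurons
    return np.asarray(inputs, dtype=float), np.asarray(targets, dtype=float)
```

With 12 points at each of the 6 training depths, the training set holds 72 samples; the 11 testing depths give the denser grid used for evaluation.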

Fig. 4.12 shows the ema at each depth, simulated from a net consisting of four input neurons, five hidden neurons, and three output neurons. As the diagram indicates, the ema ranges from 0.48455% to 2.4771%; the maximum ema, 2.4771%, is taken as the error of the corresponding net. For each number of hidden neurons, ten distinct nets are created; the ema of the ten nets with five hidden neurons is shown in Fig. 4.13. The average ema over the ten nets with the same number of hidden neurons Hn, denoted ema_Hn, represents the error for Hn neurons, where Hn runs from 1 to 10. Fig. 4.14 and Fig. 4.15 show each ema_Hn with one and three output neurons respectively. It is clear that the error with one output neuron is always greater than with three output neurons.

For accuracy, three output neurons are chosen in the proposed architecture.
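The error measures used above can be made precise with a short sketch. This assumes ema is the absolute percentage error of the estimated depth, the per-net error is its worst value over the tested depths, and ema_Hn averages the errors of the ten nets sharing a hidden-layer size, as the text describes; the function names are introduced here.

```python
import numpy as np

def ema_per_depth(z_true, z_pred):
    """Absolute percentage depth error at each tested distance
    (a plausible reading of the 'ema' measure used in the text)."""
    z_true = np.asarray(z_true, dtype=float)
    z_pred = np.asarray(z_pred, dtype=float)
    return 100.0 * np.abs(z_pred - z_true) / z_true

def net_error(z_true, z_pred):
    """The worst-depth ema is taken as the error of the whole net."""
    return float(ema_per_depth(z_true, z_pred).max())

def ema_Hn(net_errors):
    """Average error over the ten nets trained with Hn hidden neurons."""
    return float(np.mean(net_errors))
```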

Fig. 4.12 ema at each depth simulated from the net (NN(XYZ) error plot, five hidden neurons; error range 0.48455% to 2.4771%; depth in cm, error in %)

Fig. 4.13 ema of ten nets with five hidden neurons (BP(XYZ); minimum error 2.2748%, average 2.9542%, variance 0.23892%)

Fig. 4.14 Each ema_Hn with one output neuron (BP(Z); minimum error 20.3715% at three hidden neurons)

Fig. 4.15 Each ema_Hn with three output neurons (BP(XYZ); minimum error 2.9542% at five hidden neurons)

After deciding the number of output neurons, the number of neurons in the hidden layer must be resolved. Fig. 4.15 also indicates that the best choice of hidden-layer size for this problem is five. Therefore, we may reasonably conclude that a suitable MLP architecture for detecting depth consists of four input neurons, five hidden neurons, and three output neurons.
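A forward pass through the selected 4-5-3 architecture can be sketched as below. The sigmoid hidden activation and linear output are assumptions consistent with a standard backpropagation (BP) MLP; the weights shown are random placeholders, whereas the thesis obtains them by training.

```python
import numpy as np

rng = np.random.default_rng(0)

# The architecture concluded above: 4 inputs (x_l, y_l, x_r, y_r),
# 5 hidden neurons, 3 outputs (X, Y, Z).  Random placeholder weights;
# the thesis trains them with backpropagation.
W1, b1 = rng.standard_normal((4, 5)), np.zeros(5)
W2, b2 = rng.standard_normal((5, 3)), np.zeros(3)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def forward(stereo_point):
    """Map one stereo observation to an (X, Y, Z) estimate."""
    h = sigmoid(np.asarray(stereo_point) @ W1 + b1)  # hidden layer, 5 neurons
    return h @ W2 + b2                               # linear output, 3 neurons
```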

To eliminate nets that fail to learn the training data, a threshold value T on the ema of the training data must be set. If a net's training ema is larger than T, the net is not enrolled. The threshold T must therefore be set larger than the ema achievable on the training data, so that successfully trained networks can be enrolled.
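The enrollment rule above amounts to a simple filter; this one-function sketch (the name `enroll_nets` is introduced here) makes it concrete.

```python
def enroll_nets(nets, training_emas, T):
    """Keep only the nets whose ema on the training data is at most T;
    nets above the threshold are judged not to have learned the data."""
    return [net for net, e in zip(nets, training_emas) if e <= T]
```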

The error results of the proposed net for the different cases introduced in the previous subsection are shown in Fig. 4.16 to Fig. 4.19. Note that even in the worst case, the error in depth computation remained well below 4%.

Fig. 4.16 Each ema_Hn from the proposed net, Case I (BP(XYZ); minimum error 1.9753% at five hidden neurons; average 2.2439%; averages shown for the ANN and traditional methods)

Fig. 4.17 Each ema_Hn from the proposed net, Case II (BP(XYZ); minimum error 1.7435% at five hidden neurons; average 2.1169%; averages shown for the ANN and traditional methods)

Fig. 4.18 Each ema_Hn from the proposed net, Case III (BP(XYZ); minimum error 1.4234% at five hidden neurons; average 1.5368%)

Fig. 4.19 Each ema_Hn from the proposed net, Case IV (BP(XYZ); minimum error 2.2748% at five hidden neurons; average 3.2395%)

The training data was obtained from the stereo pair manually. The number of training depths was then decreased to determine whether the desired depth values could still be achieved with acceptable accuracy. If the training data is obtained only from Z = 65 and 165 cm, the average testing error is 53.0081%, as shown in Fig. 4.20. Fig. 4.21 and Fig. 4.22 show the testing error when the training data is obtained from Z = 65, 115, 165 cm and from Z = 65, 95, 135, 165 cm respectively. Fig. 4.22 indicates that training data with four distinct depths yields results with errors around 5%.
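The three reduced depth grids and the resulting training-set sizes can be summarized as below; the reported average testing errors are those from Figs. 4.20 to 4.22, and the dictionary name is introduced here.

```python
# Reduced training-depth sets examined in the text, each still with
# 12 matched stereo points per depth (distances in cm).
REDUCED_GRIDS = {
    "two depths":   [65, 165],           # average testing error 53.0081%
    "three depths": [65, 115, 165],      # average testing error 31.1636%
    "four depths":  [65, 95, 135, 165],  # average testing error 5.412%
}

def training_set_size(depths, points_per_depth=12):
    """Number of training samples produced by a given depth grid."""
    return len(depths) * points_per_depth
```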

Fig. 4.20 Testing error when the training data is obtained from Z = 65 and 165 cm (BP(XYZ); minimum error 21.7075% at five hidden neurons; average 53.0081%, variance 82.5972%)

Fig. 4.21 Testing error when the training data is obtained from Z = 65, 115, 165 cm (BP(XYZ); minimum error 12.3236% at five hidden neurons; average 31.1636%, variance 65.2786%)

Fig. 4.22 Testing error when the training data is obtained from Z = 65, 95, 135, 165 cm (BP(XYZ); minimum error 2.9263% at five hidden neurons; average 5.412%, variance 187.9889%)

The algorithm differs from the traditional depth detection algorithm in that no extrinsic or intrinsic camera parameters are found for any of the cameras. The system is trained such that it learns to find the depth of objects directly.

Chapter 5
