

4. Experimental Results

4.2. Performance of Compensation with Inner Feedback-Loop Integrator

In this section, the performance of the proposed DIS technique is evaluated and compared with existing DIS methods in terms of motion estimation and motion smoothing performance indices. Eight real video sequences, captured by hand-held and in-car cameras under various irregular conditions, are used for testing; each sequence has a resolution of 640×480. VS#1 is a video of books on a bookshelf with constant and intermittent panning in the horizontal direction; it clearly lacks features in the vertical direction. VS#2 is a video of a forest with constant panning and hand-shaking effects in both the horizontal and vertical directions. VS#3 is a video of a child, which contains a large moving object and also exhibits hand-shaking effects. VS#4 is a video taken of a car with poor image quality and tremendous fluctuation. VS#5 is a video of a gate with constant camera motion and jitter; this sequence lacks features in the horizontal direction. VS#6 is a video of a community road under bumpy conditions. VS#7 is a video of a highway with jitter. VS#8 is a video of a car turning in a parking lot.

The motion estimation performance is evaluated by the root mean square error (RMSE) between the algorithmically estimated motion vectors and the desired motion vectors, which are determined frame by frame by human visual perception while also considering the background factor. The RMSE is given by

RMSE = \sqrt{\frac{1}{N}\sum_{n=1}^{N}\left\| \hat{\mathbf{v}}_n - \mathbf{v}_n \right\|^{2}},

where \hat{\mathbf{v}}_n is the motion vector of frame n generated from the evaluated DIS algorithm, \mathbf{v}_n is the corresponding desired motion vector, and N is the number of frames.
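As a minimal sketch only, the following Python function shows how such a frame-by-frame RMSE could be computed; the function name, the array layout, and the use of the Euclidean norm for the per-frame error are assumptions for illustration rather than part of the original implementation.

```python
import numpy as np

def motion_rmse(estimated_mvs, desired_mvs):
    """Frame-by-frame RMSE between estimated and desired motion vectors.

    Both inputs are assumed to be arrays of shape (N, 2) holding the
    horizontal and vertical motion-vector components for each frame.
    """
    estimated_mvs = np.asarray(estimated_mvs, dtype=float)
    desired_mvs = np.asarray(desired_mvs, dtype=float)
    # Squared Euclidean error of the motion vector in each frame.
    sq_err = np.sum((estimated_mvs - desired_mvs) ** 2, axis=1)
    # Root of the mean squared error over all N frames.
    return float(np.sqrt(sq_err.mean()))
```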

The proposed method is compared with an RPM approach based on fuzzy set theory (RPM_FUZZY). The motion estimation results of the two methods are summarized in Table 4.1. The result for VS#1 shows that the proposed method is superior to RPM_FUZZY because the proposed technique applies the minimum projection approach and the inverse triangle method to detect the irregular components of the LMVs and then recombines the available MVs to form an IMV. The result for VS#3 shows that the GMV obtained by the proposed background evaluation scheme can avoid the influence of large moving objects. In VS#4, the higher RMSE indicates that some frames with tremendous fluctuation fall outside the MV detection range and also contain more rotational components; nevertheless, the proposed technique still outperforms RPM_FUZZY on this sequence. VS#5 lacks features in the horizontal direction, so only one component of the motion vector is reliable (see Fig. 2.5(b)). The proposed method applies the minimum projection approach and the inverse triangle method to detect the irregular components of the LMVs and then recombines the available MVs to form an RMV; this makes full use of the existing information to estimate the global motion vector. The result for VS#5 shows that the RMSE is reduced from 5.8348 to 2.5269 by our method, since RPM_FUZZY does not consider the lack-of-feature condition. The results for VS#6~VS#8 also show that the RMSEs of our method are lower than those of RPM_FUZZY, because the GMV obtained through the adaptive background-based evaluation avoids the influence of large moving objects, and the irregular components of the motion vectors are taken into account as well. According to these experiments, the proposed technique is more robust than RPM_FUZZY in dealing with video sequences with irregular conditions such as a lack of features, large moving objects, and poor image quality.
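The minimum projection approach, the inverse triangle method, and the adaptive background evaluation are defined in the earlier chapters. Purely as a generic illustration of the idea of discarding unreliable components and foreground blocks before forming a global motion vector, the sketch below combines block-level LMVs with a median; the function name, the input layout, and the median rule are assumptions and do not reproduce the thesis's actual tests.

```python
import numpy as np

def estimate_gmv(lmvs, reliable, background):
    """Illustrative GMV estimation from block-level local motion vectors.

    lmvs:       (B, 2) array of local motion vectors (x, y), one per block.
    reliable:   (B, 2) boolean array; reliable[b, d] marks whether the d-th
                component of block b passed the reliability tests.
    background: (B,) boolean array flagging blocks judged to be background.
    """
    lmvs = np.asarray(lmvs, dtype=float)
    reliable = np.asarray(reliable, dtype=bool)
    background = np.asarray(background, dtype=bool)
    gmv = np.zeros(2)
    for d in range(2):  # 0: horizontal, 1: vertical
        # Keep only components that are reliable and come from background blocks.
        mask = reliable[:, d] & background
        if np.any(mask):
            gmv[d] = np.median(lmvs[mask, d])  # robust to remaining outliers
        else:
            gmv[d] = 0.0  # no usable information in this direction
    return gmv
```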

The motion smoothing performance is evaluated by the smoothness index (SI) proposed in Section 3.2.2. Fig. 4.5(c) shows the original motion trajectory and the compensated motion trajectory generated by the proposed method. Compared with Figs. 4.5(a) and (b), the proposed method reduces the steady-state lag of the compensated motion trajectory under constant motion and keeps the CMVs within an appropriate range.

Table 4.2 shows the SI comparisons of the three CMV generation methods presented in Fig. 4.5. Generating the CMV without a clipper is impractical because it sacrifices too much effective image area; that is, the maximum CMV is not guaranteed to fit within the practical compensation range. Compared with CMV generation without the integrator, the proposed CMV generation method dramatically reduces the SI value from 5.6482 to 0.9346, because the inner feedback-loop integrator greatly reduces the steady-state lag in image sequences with constant motion.
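Eq. (3.6) itself is defined in Chapter 3. Purely to make the structure being compared concrete, the sketch below assumes a common form of such a scheme: a damped accumulation of the GMVs (gain k), a hard clipper that bounds the CMV to the usable compensation range, and an inner feedback-loop integrator (gain beta) whose accumulated output is fed back so that, during constant panning, the CMV is driven back toward zero instead of saturating at the clipper limit. The update rules and parameter values are assumptions for illustration, not the thesis's exact equations.

```python
import numpy as np

def generate_cmv(gmvs, k=0.95, beta=0.05, limit=47):
    """Illustrative one-axis CMV generation with clipper and inner integrator.

    gmvs:  sequence of per-frame global motion vector components (pels).
    k:     damping gain of the motion accumulation (smoothing strength).
    beta:  inner feedback-loop integral gain (steady-state lag removal).
    limit: clipper bound, i.e., the physical compensation range in pels.
    """
    cmv_prev = 0.0  # previous compensation motion vector
    acc = 0.0       # inner feedback-loop integrator state
    cmvs = []
    for gmv in np.asarray(gmvs, dtype=float):
        # Damped accumulation of global motion, corrected by the
        # integrator feedback.
        cmv = k * cmv_prev + gmv - beta * acc
        # Clipper: keep the compensation inside the usable border area.
        cmv = float(np.clip(cmv, -limit, limit))
        # The integrator accumulates the clipped output; under constant
        # panning its feedback pushes the CMV back toward zero, freeing
        # the compensation range to absorb hand-shake jitter.
        acc += cmv
        cmvs.append(cmv)
        cmv_prev = cmv
    return np.array(cmvs)
```

With beta set to 0 the loop reduces to the damped accumulator with a clipper alone; a constant panning rate g then drives the output toward g/(1-k), which for typical settings saturates at the clipper limit and corresponds to the steady-state lag that the integrator is meant to remove.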

Table 4.1.

RMSE comparisons of RPM_FUZZY and the proposed method with respect to eight real video sequences.

Method               VS#1    VS#2    VS#3    VS#4    VS#5    VS#6    VS#7    VS#8
RPM_FUZZY            2.5729  2.4166  2.3958  6.0469  5.8348  0.8031  2.6618  2.2749
The proposed method  0.2449  0.7280  1.2369  2.5632  2.5269  0.3536  1.6837  0.5701

Table 4.2.

SI comparisons of three CMV generation methods.

Methods                                          SI      Max. CMV value (pels)
Eq. (3.1)                                        0.7990  134
Eq. (3.1) with clipper                           5.6482  47
The proposed CMV generation method (Eq. (3.6))   0.9346  47

Note: The original SI is 7.4372. The clipper is bounded within ±47 pels.


Fig. 4.5. Performance comparison of three different CMV generation methods applied to a video sequence with panning and hand shaking. (a) CMV generation method in Eq. (3.1). (b) CMV generation method in Eq. (3.1) with the clipper in Eq. (3.5). (c) The proposed method in Eq. (3.6).

We also evaluate the CMV generation methods on four GMV sets generated from real video sequences (GMV sets #1~4). Fig. 4.6 compares the original and compensated motion trajectories obtained with two different CMV generation methods, Eq. (3.1) with clipper and Eq. (3.6), for these four GMV sets. The parameter settings of Eq. (3.1) with clipper and Eq. (3.6) are listed in Table 4.3. The parameter k is set to the same value for both the horizontal and vertical directions, so the two directions have the same shaking-absorption effect. The parameter β is the inner feedback-loop integral gain; it determines how quickly the steady-state lag is eliminated during constant motion, and it should not be set too high, in order to avoid resonance. In in-car DIS applications, constant motion occurs more frequently in the horizontal direction than in the vertical direction, so a higher β is used in the horizontal direction to obtain better visual quality.
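Continuing the sketch given after the discussion of Table 4.2, per-axis parameters could be applied as follows; generate_cmv is the illustrative function from that sketch, the gmv_x and gmv_y arrays are synthetic stand-ins for per-frame GMV components, and all numeric values are placeholders rather than the settings of Table 4.3.

```python
import numpy as np

# Synthetic per-frame GMV components: constant panning plus jitter in x,
# jitter only in y (placeholders for GMVs estimated from a real sequence).
rng = np.random.default_rng(0)
gmv_x = 3.0 + rng.normal(0.0, 1.5, size=200)
gmv_y = rng.normal(0.0, 1.5, size=200)

# Same k for both directions (same shaking absorption); a larger beta in
# the horizontal direction removes the constant-panning lag faster.
cmv_x = generate_cmv(gmv_x, k=0.95, beta=0.10, limit=47)
cmv_y = generate_cmv(gmv_y, k=0.95, beta=0.02, limit=47)
```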

In each subfigure, the dotted line indicates the original trajectory, while the solid and dashed lines indicate the compensated CMV trajectories obtained by Eq. (3.6) and by Eq. (3.1) with clipper, respectively. GMV sets #1 and #2 (Figs. 4.6(a) and (b)) are estimated from video sequences with constant motion in the images. GMV set #3 (Fig. 4.6(c)) is estimated from VS#3, and GMV set #4 (Fig. 4.6(d)) is estimated from VS#4. According to the results, the compensated horizontal motion trajectories of GMV sets #1, #2, and #4, all of which contain more constant motion, are closer to the original horizontal motion trajectories when generated by the proposed CMV generation method than when generated by the other method. This means that the proposed method reduces the steady-state lag and provides more room to absorb the shaking effects of the image sequences without violating the physical range limitation. For GMV set #3, which is estimated from the video captured on the highway, the method with the integrator exhibits a slight overshoot, which becomes evident when it is compared with the method without the integrator. This overshoot is an intrinsic property of adding an integrator to the processing loop, but it is a good trade-off because the integrator greatly reduces the steady-state lag of the motion trajectory. Table 4.4 shows the SI comparisons corresponding to Fig. 4.6. The original SIs can be regarded as the smoothness indices of the original sequences, which contain both constant motion and undesired shaking components. In general, the proposed CMV generation method provides better motion smoothing performance than the approach without the integrator on most real video sequences with constant motion. Overall, the experimental results show that the proposed method can handle various circumstances and performs better in both the quantitative evaluations (RMSE and SI) and the human visual evaluation.


Fig. 4.6. Comparisons of original and compensated motion trajectories by two different CMV generation methods (with and without integrator) with respect to (a) GMV set #1, (b) GMV set #2, (c) GMV set #3, (d) GMV set #4.

Table 4.3.

The parameters applied to CMV generation with different equations

Method (Equation)        k    β     Clipper limit
Eq. (3.1) with clipper   18   -     ±47 pels
Eq. (3.6)                18   0.95  ±47 pels

Table 4.4.

SI comparisons of two different CMV generation methods with respect to four different GMV sets.

Video sequences    SI, Eq. (3.1) with clipper    SI, Eq. (3.6)
