
Chapter 3. Proposed Techniques

3.2. Object Grouping

3.2.1. Objective

The KLT algorithm performs only point-to-point tracking: it tells us where each feature point will be in the next frame, but not where the objects are. Many other tracking methods can find moving objects with techniques such as connected components, so how to group feature points into moving objects is an open problem for the KLT algorithm. In this section we describe our methods for grouping objects, which consist of the "Grouping Condition" and the "4-direction Gray Level". Figure 13 shows the flow chart of object grouping, and the Object Grouping method is shown in Table 3.


Algorithm: Grouping Condition

Input: KLT Feature Point[i], for i = 0, 1, 2, 3, ..., FeatureNo
Output: An object Number[k], for k = 0, 1, 2, 3, ...

If KLT Feature Point Number > 0
    For i = 0 to FeatureNo
        If Difference(KLT Feature Point[i]) > TH
            For j = 0 to FeatureNo
                If Distance(KLT Feature Point[i], KLT Feature Point[j]) < TH
                    Number[j] <- Number[i]
                End If
            End For
        End If
    End For
End If

Table 3 Grouping Condition Algorithm
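The pseudocode in Table 3 can be sketched in Python as follows. This is a minimal illustration, not the thesis implementation: the helper names (`point_distance`, `group_feature_points`) and the sample data are assumptions, and the `moving` flags stand in for the frame-difference test.

```python
# Illustrative sketch of the Grouping Condition algorithm in Table 3.
# Helper names and sample values are assumptions for demonstration.

def point_distance(p, q):
    """Euclidean distance between two feature points (x, y)."""
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def group_feature_points(points, moving, dist_th=10.0):
    """Assign a group number to every feature point.

    points -- list of (x, y) KLT feature points
    moving -- list of bools: True if the frame difference at that
              point exceeds the difference threshold
    Returns a list where points in the same group share one number.
    """
    number = list(range(len(points)))   # each point starts as its own group
    for i, p in enumerate(points):
        if not moving[i]:               # skip points not on moving objects
            continue
        for j, q in enumerate(points):
            if moving[j] and point_distance(p, q) < dist_th:
                number[j] = number[i]   # pull j into i's group
    return number

# Two nearby moving points and one far-away static point.
pts = [(0, 0), (3, 4), (100, 100)]
labels = group_feature_points(pts, [True, True, False])
```

The two nearby moving points end up with the same group number, while the static point keeps its own.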

Figure 13 Object Grouping flow chart


3.2.2. Grouping Condition

The grouping condition is the preprocessing step of object grouping; we use it to find the moving objects. Figure 14 shows the flow chart of the grouping condition.

Figure 14 Grouping condition flow chart

We propose two methods to group objects: we use the difference between video frames to find the moving objects, and we use the distance between feature points to find the points that lie on the same object.

I. Difference

After the KLT algorithm, we obtain a video frame with feature points, as shown in Figure 15, which displays all feature points on the frame. However, many of these feature points are of no interest to us: the KLT algorithm extracts feature points everywhere, but we only want the feature points that lie on moving objects.

After the KLT algorithm, we obtain fewer than five hundred feature points. The KLT algorithm could extract many more, because the environment contains strong texture on the intersection markings, the zebra crossing, and some noise; if we want more feature points, we only need to change the threshold.


Figure 15 Video frame after KLT algorithm


Figure 16 shows the difference image of two consecutive frames. The difference reveals the moving objects in the original frames.


Figure 16 (a) original frame1 (b) original frame2 (c) difference frame

At each pixel coordinate (i, j), we subtract the gray levels of the two consecutive frames and call the result the difference D(i, j). If D(i, j) is greater than TH = 30, the pixel belongs to a moving object, as shown in (3-1):

    D(i, j) = 255 if |G1(i, j) - G2(i, j)| > TH, and 0 otherwise    (3-1)

where G1 and G2 are the gray levels of the two consecutive frames. We then keep only the feature points that lie on moving-object pixels; the others are deleted. The difference result thus contains only the feature points we are interested in, as shown in Figure 17.
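The frame-difference test in (3-1) can be sketched as follows, assuming the frames are given as 2-D lists of gray levels in 0-255; the function name `difference` is illustrative.

```python
# A minimal sketch of the frame-difference test in (3-1).
# Frames are 2-D lists of gray levels (0-255).

TH = 30  # difference threshold from the text

def difference(frame1, frame2):
    """Return a binary image: 255 where the gray level changed by more
    than TH between the two consecutive frames, 0 elsewhere."""
    h, w = len(frame1), len(frame1[0])
    return [[255 if abs(frame1[i][j] - frame2[i][j]) > TH else 0
             for j in range(w)]
            for i in range(h)]

# One pixel changes strongly (a moving object); the rest stay still.
f1 = [[10, 10], [10, 10]]
f2 = [[10, 90], [10, 10]]
diff = difference(f1, f2)
```

A feature point at (i, j) is kept only when the binary image is 255 there, which is how the KLT frame is reduced to the difference frame.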


Figure 17 (a) original frame (b) KLT frame (c) difference frame

Figure 17(b) shows the result of the KLT algorithm, which we call the KLT frame. After taking the difference, we obtain the result in Figure 17(c), which we call the difference frame. The difference frame contains fewer feature points than the KLT frame, and all of them lie on the moving objects. This step significantly reduces the number of feature points without affecting our algorithm.


II. Distance

In the difference frame, we can find the feature points that lie on moving objects, but we cannot tell which feature points belong to the same object. We want to group the feature points of each moving object. The KLT algorithm only tracks from feature point to feature point, but in this thesis we want to improve moving-object tracking based on the KLT algorithm. Our method for this problem is called distance.

We use the idea that feature points on the same moving object lie close together. With this method we can group most objects, except objects that are very close together or occlude each other. We compute the distance between each pair of feature points with (3-2), which gives the distance between any two feature points in the video frame. Based on these results, we set TH = 10.

    Distance(P_i, P_j) = sqrt((x_i - x_j)^2 + (y_i - y_j)^2)    (3-2)

Using the two criteria above, we can easily group the feature points. Figure 18 shows the result of the distance step, which we call the distance frame: feature points on the same object are drawn in the same color, and we define four colors to show the groups. In the KLT algorithm, every feature point has its own number; in the distance step, when two feature points are grouped, one keeps its number and the other's number is changed to match it. After the distance step, if there are three moving objects in the video frame, all feature points fall into three groups, each with its own number.
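The distance test in (3-2) with TH = 10, and the resulting group count, can be sketched as follows; the names `same_object` and `count_groups` are illustrative, not from the thesis code.

```python
# Sketch of the distance criterion (3-2) with TH = 10.
# Feature points are (x, y) tuples; names here are illustrative only.

DIST_TH = 10.0

def same_object(p, q):
    """True if two feature points are close enough to lie on one object."""
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5 < DIST_TH

def count_groups(numbers):
    """After renumbering, the distinct numbers are the moving objects."""
    return len(set(numbers))
```

For example, points at distance 5 pass the test and share a group number, while points at distance 10 do not; three distinct numbers after renumbering mean three moving objects.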




Figure 18 (a) original frame (b) difference frame (c) distance frame

The difference frame shows all the remaining feature points, but it does not show which points belong to the same group. After the distance step, feature points in the same group share the same color, as shown in Figure 18(c).

3.2.3. 4-direction Gray Level

After the distance step, we know the groups of feature points. In the next step, we want to show the full object in the video frame. We define an algorithm named 4-direction gray level, shown in Table 4. The algorithm is based on a fuzzy concept and uses a relatively small number of calculations to draw the object shape.

After the difference step, the feature points lie on the objects. Using two consecutive frames, we can find the moving objects. Figure 19 shows two consecutive frames after the KLT algorithm and the difference step.


Table 4 4-direction Gray Level Algorithm


Figure 19 (a) Frame1 after KLT and difference (b) Frame2 after KLT and difference


Figure 20 Frame1 after OptKLT and difference

In two consecutive frames, the feature points are not always the same: the same object has a different number of feature points in different frames, as shown in Figure 19. Because the KLT feature points are not stable, we cannot use them directly to check the objects. We therefore use the same frame to improve the OpenCV KLT tracking; the result is shown in Figure 20. The green points are the feature points of the current frame, and the red points are the predicted feature points in the next frame. We use this result to compare the gray levels.

In the current frame, choose a feature point A, and let A' be the corresponding feature point in the next frame. Draw two perpendicular lines through each feature point, as shown in Figure 21. The two lines give four directions around the feature point: up, down, left, and right.

With the directions defined, we compare A and A'. First we compare along the up direction: we take A(x-1, y) and A'(x-1, y), whose gray levels we call GrayA and GrayA'. The similarity of the pair, Similar(A, A'), is defined in (3-3); the threshold TH means that GrayA and GrayA' must be more than 90% similar.


Figure 21 Feature point 4-direction

    Similar(A, A') = 1 if min(GrayA, GrayA') / max(GrayA, GrayA') >= TH, and 0 otherwise    (3-3)

    Pixel(n) = 2n - 1    (3-4)

The two perpendicular lines split each direction into rows. As Figure 21 shows, the first row has one pixel and the second row has three pixels, following (3-4). The first row in the up direction of the current frame is named RowA, and RowA' is the corresponding row in the next frame. We count the pixels in the row: if more than half of the pixel pairs are similar, we define RowA and RowA' as similar, and then move on to the next row.
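The pixel-similarity test (3-3) and the majority rule for rows can be sketched as follows. This is a hedged illustration: the 90% criterion is read as a min/max gray-level ratio with TH = 0.9, and the function names are not from the thesis code.

```python
# Sketch of the 4-direction gray-level comparison: a pixel pair is
# "similar" when the gray levels agree to at least 90% (TH in (3-3)),
# and two rows are similar when more than half their pixel pairs are.
# Names and the exact similarity measure are illustrative assumptions.

def similar(gray_a, gray_b, th=0.9):
    """Gray levels are similar if their ratio is at least th (90%)."""
    if gray_a == gray_b:
        return True                     # also covers the 0/0 case
    return min(gray_a, gray_b) / max(gray_a, gray_b) >= th

def rows_similar(row_a, row_b):
    """Rows match when more than half of their pixel pairs are similar."""
    matches = sum(similar(a, b) for a, b in zip(row_a, row_b))
    return matches > len(row_a) / 2
```

For example, gray levels 100 and 95 are similar (ratio 0.95), and a three-pixel row with two similar pairs passes the majority rule.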

After the 4-direction gray level step, we can draw the object more completely, as shown in Figure 22. The 4-direction gray level avoids assuming that the object shape is round; even when the object has a bizarre shape, this method can handle it. In Figure 22(c), the object can be found easily.

Our Object Grouping algorithm generates objects from the feature points, and we can use the results to cluster the objects, as shown in Figure 23.


Figure 22 (a) original frame (b) distance frame (c) 4-direction gray level frame

Figure 23 (a) original frame (b) 4-direction gray level frame (c) clustering
