
Chapter 5. Experimental Results

5.1. Experimental Results of Object Detection

In the following, we show our experimental results for object detection. The left image of each row in Figure 32 presents the original frame, and the right image shows the output of the object detection algorithm. The frames in Figure 32 include many types of objects.

Figure 32 Experimental results of object detection (panels (a)–(l))

5.2. Experimental Results of Moving Object Positioning

Here, we demonstrate several object tracking examples. In Figure 33 to Figure 40, the top image of each row presents the original frame, and the bottom image shows the output of the object tracking algorithm. The examples include a normal case and several special cases. Figures 34 and 35 use the same video as Figure 36, but Figure 34 is produced with the Kalman filter and Figure 35 with mean-shift. In Table 8, we compare our algorithm with the reference algorithms, and in Table 9 we compare it with the Kalman filter.
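The Kalman filter baseline compared against here can be illustrated with a minimal constant-velocity sketch. This is not the exact filter used in the experiments; the state layout and the noise values q and r below are assumptions chosen for readability.

```python
# Minimal 1-D constant-velocity Kalman filter: state x = [position, velocity].
# The process noise q and measurement noise r are illustrative assumptions.

def kalman_step(x, P, z, dt=1.0, q=1e-2, r=1.0):
    """One predict/update cycle; returns the updated state and covariance."""
    # Predict: x' = F x, P' = F P F^T + Q, with F = [[1, dt], [0, 1]].
    xp = [x[0] + dt * x[1], x[1]]
    Pp = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
           P[0][1] + dt * P[1][1]],
          [P[1][0] + dt * P[1][1],
           P[1][1] + q]]
    # Update with a scalar position measurement z (H = [1, 0], noise r).
    y = z - xp[0]                      # innovation
    s = Pp[0][0] + r                   # innovation covariance
    K = [Pp[0][0] / s, Pp[1][0] / s]   # Kalman gain
    x_new = [xp[0] + K[0] * y, xp[1] + K[1] * y]
    P_new = [[(1 - K[0]) * Pp[0][0], (1 - K[0]) * Pp[0][1]],
             [Pp[1][0] - K[1] * Pp[0][0], Pp[1][1] - K[1] * Pp[0][1]]]
    return x_new, P_new

# Track an object moving 2 pixels per frame with noiseless measurements.
x, P = [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]
for t in range(1, 11):
    x, P = kalman_step(x, P, z=2.0 * t)
print(f"estimated velocity after 10 frames: {x[1]:.2f}")  # approaches 2.0
```

In practice such a filter runs per tracked object in two dimensions; the one-dimensional version keeps the predict/update algebra easy to follow.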

Figure 33 Experimental results of object tracking: (a) frame 1, (b) frame 4, (c) frame 7, (d) frame 11, (e) frame 15, (f) frame 30. This is a normal case including a car and a motorcycle.

Figure 34 Experimental results of object tracking by Kalman filter

Figure 35 Experimental results of object tracking by Mean-shift


Figure 36 Experimental results of object tracking: (a) frame 380, (b) frame 400, (c) frame 450, (d) frame 500, (e) frame 550, (f) frame 650. This is a special case in which the white car (marked by the green line) rotates.


Figure 37 Experimental results of object tracking: (a) frame 800, (b) frame 820, (c) frame 830, (d) frame 840, (e) frame 850, (f) frame 900. This is a special case in which two cars have the same velocity from frame 800 to frame 820 but different velocities by frame 840; in frame 840 the two cars' loci are shown in yellow and green.


Figure 38 Experimental results of object tracking: (a) frame 2000, (b) frame 2050, (c) frame 2100, (d) frame 2150, (e) frame 2200, (f) frame 2250. This is a special case in which many objects enter the frame at the same time. At first they have the same velocity and are very close, but within a few frames we can distinguish them.


Figure 39 Experimental results of object tracking: (a) frame 2000, (b) frame 2050, (c) frame 2100, (d) frame 2150, (e) frame 2200, (f) frame 2250. This is a special case in which occlusion occurs between two cars, but we can still distinguish the two cars.


Figure 40 Experimental results of object tracking: (a) frame 2000, (b) frame 2050, (c) frame 2100, (d) frame 2150, (e) frame 2200, (f) frame 2250. This is a special case with a truck and a motorcycle. Because they have the same velocity, we find only the truck; note that the truck is not easy to distinguish with other tracking methods.


                    Mean-shift          Kalman filter       KLT        UVLAB
Based on            Gradient + feature  Gradient + feature  Gradient   Gradient + learning
Uses color          Yes                 No                  No         No
Creates background  No                  Yes                 No         No
Object boundary     No                  No                  No         Yes
Object tracking     Yes                 Yes                 No         Yes
Occlusion handling  Sometimes           No                  No         Objects have been separated
FPS                 Real time           Real time           Real time  28

Table 8 Comparison Table

                  Sample 1                Sample 2
                  Kalman filter  UVLAB    Kalman filter  UVLAB
Object number     97             97       29             29
Tracked objects   64             85       18             27
Tracking rate     66%            87%      62%            93%
False alarms      0              0        20             0

Note: the Kalman filter uses 400 frames to create the background.

Table 9 Comparison Table with Kalman filter
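The tracking-rate percentages in Table 9 are consistent with dividing the tracked-object counts (64, 85, 18, and 27) by the number of objects in each sample (97 and 29). A quick check under that assumed definition:

```python
# Tracking rate as tracked objects / total objects, in percent.
# The (tracked, total) pairs are taken from Table 9; the definition
# rate = tracked / total is an assumption that reproduces the table.
samples = {
    "Sample 1, Kalman filter": (64, 97),
    "Sample 1, UVLAB": (85, 97),
    "Sample 2, Kalman filter": (18, 29),
    "Sample 2, UVLAB": (27, 29),
}
rates = {name: 100.0 * tracked / total
         for name, (tracked, total) in samples.items()}
for name, rate in rates.items():
    print(f"{name}: {rate:.1f}%")
```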


Chapter 6. Conclusion

6.1. Achievements

We have presented a new method for efficient object tracking with real-time performance. First, we use the KLT algorithm to find KLT feature points, and then perform the object detection algorithm, including object grouping. In the algorithm, we utilize the frame difference, the distance between points, and the 4-direction gray level to detect objects. Our tracking does not need to establish a background, so it is robust to environmental changes and avoids the time wasted on establishing a background.
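The grouping step can be illustrated with a simple distance-based clustering of feature points. This is only a sketch under assumed rules (a fixed distance threshold with transitive merging); the actual algorithm also uses the frame difference and the 4-direction gray level.

```python
import math

def group_points(points, max_dist=20.0):
    """Group feature points whose pairwise distance is below max_dist
    (transitively), a stand-in for the object-grouping step."""
    parent = list(range(len(points)))  # union-find forest over point indices

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    # Merge any two points closer than the threshold.
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if math.dist(points[i], points[j]) < max_dist:
                parent[find(i)] = find(j)

    # Collect points by their root: each root is one candidate object.
    groups = {}
    for i in range(len(points)):
        groups.setdefault(find(i), []).append(points[i])
    return list(groups.values())

# Two well-separated clusters of KLT-like feature points.
pts = [(0, 0), (5, 4), (8, 2), (100, 100), (104, 97)]
print(len(group_points(pts)))  # 2 groups -> 2 candidate objects
```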

In this paper, we proposed a new method that combines gradient-based and learning-based approaches, which enhances the precision of the system. In addition, our algorithm remedies most disadvantages of feature-based methods. First, we adopt the KLT algorithm: we search only for the corners of objects, which avoids relying on special features that appear only on particular objects. Second, we solve several problems that are disadvantages of most tracking methods, and our algorithm's time complexity is low.

Experiments were conducted on different scenes, including different types of objects and special cases. The system succeeds in detecting and tracking the normal cases, and can also accommodate large vehicles, occlusion, rotation, etc. Through the experiments, we found that the overall execution time is very short. In the future, we believe that when the system is ported to an embedded platform, it can achieve real-time performance as well.


6.2. Future Works

To further improve the performance and robustness of our algorithm, some enhancements or trials can be made in the future. First, if an object is too small, KLT feature points cannot be found on it. Therefore, we can choose another feature to handle this case.

Second, because we use the frame difference to delete KLT feature points, we cannot find an object that stays in the same place. How to retain the KLT feature points and judge whether an object has left or stayed is a good problem to solve.
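This stationary-object limitation follows directly from difference-based filtering: a point survives only where consecutive frames differ. A minimal sketch (the threshold value and the list-of-lists frame layout are assumptions):

```python
def moving_points(points, prev_frame, cur_frame, thresh=15):
    """Keep a KLT feature point only if the gray-level difference between
    consecutive frames at that point exceeds thresh. A stationary object
    produces a near-zero difference, so all of its points are discarded --
    which is exactly the limitation noted above."""
    kept = []
    for (x, y) in points:
        if abs(cur_frame[y][x] - prev_frame[y][x]) > thresh:
            kept.append((x, y))
    return kept

# Toy 3x3 gray-level frames: only the pixel at (1, 1) changes.
prev = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
cur  = [[10, 10, 10], [10, 200, 10], [10, 10, 10]]
print(moving_points([(0, 0), (1, 1)], prev, cur))  # [(1, 1)]
```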

After accomplishing the functions mentioned above, the system will be widely applicable, and we will port it from the PC to a DSP platform to make it more commercially certifiable and appealing.


