
Chapter 4 Experimental Results

4.4 Results of Detecting Vehicles in Video

In this section, the experimental results of detecting vehicles in videos are demonstrated.

We compare our results with the method of J.-F. Lee [29], which combines AdaBoost with a Probabilistic Decision-Based Neural Network (PDBNN), and with a method that uses a Gaussian Mixture Model (GMM) [30][31] to build the background image. The testing videos cover both daytime and evening traffic in different scenes. Tables 4-3 to 4-7 list the statistical results for the testing videos, together with a comparison of the processing speed in frames per second (FPS). Tables 4-3 to 4-5 compare the daytime testing videos, and Tables 4-6 and 4-7 compare the evening testing videos. The detection rate and false alarm rate are computed by Equation 4-3 and Equation 4-4, respectively.

Detection rate = Number of correctly detected vehicles / Total number of vehicles    (4-3)

False alarm rate = Number of false alarms / Total number of detections    (4-4)
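
For clarity, a minimal Python sketch of how these two rates can be computed from per-video counts is given below. The function names and counts are hypothetical, and the normalisation follows the definitions in Equations 4-3 and 4-4 as stated above; this is not code from the original experiments.

    # Hedged sketch: computes the two rates of Eq. 4-3 and Eq. 4-4 from raw counts.
    def detection_rate(detected_vehicles, total_vehicles):
        # Eq. 4-3: fraction of ground-truth vehicles that the system detects.
        return detected_vehicles / total_vehicles if total_vehicles else 0.0

    def false_alarm_rate(false_alarms, total_detections):
        # Eq. 4-4: fraction of reported detections that are not vehicles.
        return false_alarms / total_detections if total_detections else 0.0

    # Illustrative (made-up) counts for one test video:
    print(detection_rate(940, 1000))       # 0.94
    print(false_alarm_rate(35, 940 + 35))  # about 0.036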

Table 4-3 Performance comparison of video at scene 1 (Daytime)

Table 4-4 Performance comparison of video at scene 2 (Daytime)


Table 4-5 Performance comparison of video at scene 3 (Daytime)

Table 4-6 Performance comparison of video at scene 1 (Evening)


Table 4-7 Performance comparison of video at scene 3 (Evening)

In the daytime, both the method of [29] and our proposed system perform well. As for the GMM method, because the test scenes contain heavy traffic with vehicles and motorcycles appearing at the same time, it is difficult for it to segment vehicles cleanly and to distinguish motorcycles from vehicles. In the evening, our system outperforms [29] and [30][31], achieving a higher detection rate with an acceptable false alarm rate. The detection rate of [29] drops as evening approaches because vehicle edges become ambiguous. For [30][31], the problem caused by motorcycles remains, and the light from vehicle headlamps introduces further errors. In terms of processing speed, our proposed system is also faster than [29] and close to [30][31].
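
For reference, the GMM background-modelling baseline of [30][31] is similar in spirit to the mixture-of-Gaussians background subtractor available in OpenCV. The Python sketch below uses cv2.createBackgroundSubtractorMOG2 as a stand-in (an assumption, not the exact implementation evaluated in the tables; the video file name is hypothetical) and illustrates why, in dense traffic, a vehicle and a nearby motorcycle can merge into a single foreground blob.

    import cv2

    # Hedged sketch of a GMM background-subtraction baseline in the spirit of [30][31];
    # OpenCV's MOG2 subtractor is a stand-in, not the code evaluated in Tables 4-3 to 4-7.
    cap = cv2.VideoCapture("traffic.avi")  # hypothetical test video
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)     # per-pixel foreground/background decision
        mask = cv2.medianBlur(mask, 5)     # suppress isolated noise pixels
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            x, y, w, h = cv2.boundingRect(c)
            if w * h > 400:                # in heavy traffic one blob can cover a car
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 255), 2)  # plus a motorcycle
        cv2.imshow("GMM baseline", frame)
        if cv2.waitKey(1) == 27:           # press Esc to stop
            break

    cap.release()
    cv2.destroyAllWindows()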

As illustrated in Tables 4-3 to 4-7, the results meet our objective: the system can be applied to real-time applications. Figure 4-6 shows some captured frames of the detection results.

The red rectangles are the results of the proposed system, the green rectangles are the results of [29], and the yellow rectangles are the results of the GMM method.

Fig. 4-6 Captured frames (a)-(d) of the video detection results

Chapter 5

Conclusions and Future Work

Without using background information, the proposed vehicle detection system is more stable and reliable. The three-stage structure of the system, consisting of the AdaBoost vehicle detector, the false alarm eliminator, and the stabilizer, simplifies the problem that each stage has to solve.

At the AdaBoost vehicle detector stage, the first priority is to detect as many vehicles as possible, both in daytime and in the evening. In other words, the AdaBoost vehicle detector only has to focus on detection capability, and we use non-converging training to achieve this goal. At the false alarm eliminator stage, the main task is to filter out the false alarms generated by the previous stage. We propose two schemes to handle false alarms in daytime and in the evening respectively, and use a switcher to integrate the two schemes. At the stabilizer stage, the purpose is to stabilize the detection results and to remove further false alarms. Each stage of the system has its own main functionality, and the stages perform much better when combined. This thesis demonstrates a robust vehicle detection system that operates both in daytime and in the evening and can be applied to real-time applications.
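
The control flow of the three stages, including the day/evening switcher in the false alarm eliminator, can be summarised by the Python sketch below. Every helper is a trivial placeholder standing in for the corresponding component described above, not actual code from the system.

    # Hedged sketch of the three-stage pipeline; all helpers are placeholders.
    def adaboost_detect(frame):
        # Stage 1: AdaBoost detector trained with non-converging training for high recall.
        return []  # list of candidate bounding boxes

    def is_evening(frame):
        # Switcher: e.g. decided from global brightness (placeholder logic).
        return False

    def eliminate_daytime(frame, boxes):
        return boxes  # Stage 2, daytime false alarm elimination (placeholder)

    def eliminate_evening(frame, boxes):
        return boxes  # Stage 2, evening false alarm elimination (placeholder)

    def stabilize(boxes, state):
        return boxes, state  # Stage 3: temporal smoothing and further false alarm removal

    def detect_vehicles(frame, state):
        candidates = adaboost_detect(frame)                                # Stage 1
        scheme = eliminate_evening if is_evening(frame) else eliminate_daytime
        candidates = scheme(frame, candidates)                             # Stage 2
        return stabilize(candidates, state)                                # Stage 3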

To further improve the performance of our system, several enhancements can be pursued in the future. First, the system can be extended to cover rear-view and side-view vehicles. Second, the detection ability can be improved so that vehicles can be detected in much darker conditions. Third, a more sophisticated tracking algorithm can be integrated into the system, since the tracking algorithm currently used by the stabilizer is simple.

References

[1] T. Zielke, M. Brauckmann, and W. V. Seelen, “Intensity and edge-based symmetry detection with an application to car-following,” CVGIP, Image Underst., vol. 58, no. 2, pp. 177–190, Sep. 1993.

[2] A. Kuehnle, “Symmetry-based recognition of vehicle rears,” Pattern Recognit. Lett., vol. 12, no. 4, pp. 249–258, Apr. 1991.

[3] A. Bensrhair, M. Bertozzi, A. Broggi, P. Miche, S. Mousset, and G. Toulminet, “A cooperative approach to vision-based vehicle detection,” in Proceedings of the 4th IEEE Conference on Intelligent Transportation Systems (ITSC ’01), pp. 207–212, Oakland, Calif, USA, August 2001.

[4] S. D. Buluswar and B. A. Draper, “Color Machine Vision for Autonomous Vehicles,” Int’l J. Eng. Applications of Artificial Intelligence, vol. 1, no. 2, pp. 245-256, 1998.

[5] D. Guo, T. Fraichard, M. Xie, and C. Laugier, “Color Modeling by Spherical Influence Field in Sensing Driving Environment,” Proc. IEEE Intelligent Vehicle Symp., pp. 249-254, 2000.

[6] N. Matthews, P. An, D. Charnley, and C. Harris, “Vehicle Detection and Recognition in Greyscale Imagery,” Control Eng. Practice, vol. 4, pp. 473-479, 1996.

[7] C. Demonceaux, A. Potelle, and D. Kachi-Akkouche, “Obstacle detection in a road scene based on motion analysis,” IEEE Transactions on Vehicular Technology, vol. 53, no. 6, pp. 1649–1656, 2004.

[8] A. Giachetti, M. Campani, and V. Torre, “The Use of Optical Flow for Road Navigation,” IEEE Trans. Robotics and Automation, vol. 14, no. 1, pp. 34-48, 1998.

[9] J. Collado, C. Hilario, A. de la Escalera, and J. Armingol, “Model Based Vehicle Color and Shape Features,” Proc. IEEE Int'l Conf. Intelligent Transportation Systems, 2004.

[11] J. Wang, G. Bebis, and R. Miller, “Overtaking Vehicle Detection Using Dynamic and Quasi-Static Background Modeling,” Proc. IEEE Workshop Machine Vision for Intelligent Vehicles, 2005.

[12] A. Khammari, E. Lacroix, F. Nashashibi, and C. Laurgeau, “Vehicle Detection Combining Gradient Analysis and AdaBoost Classification,” Proc. IEEE Conference on Intelligent Transportation Systems, pp. 66-71, 2005.

[13] M. Betke, E. Haritaoglu, and L. Davis, “Multiple Vehicle Detection and Tracking in Hard Real Time,” IEEE Intelligent Vehicles Symposium, pp. 351–356, 1996.

[14] J. Ferryman, A. Worrall, G. Sullivan, and K. Baker, “A Generic Deformable Model for Vehicle Recognition,” Proceedings of British Machine Vision Conference, pp. 127–136, 1995.

[15] Z. Sun, G. Bebis, and R. Miller, “On-Road Vehicle Detection Using Gabor Filters and Support Vector Machines,” Proc. IEEE Int’l Conf. Digital Signal Processing, July 2002.

[16] D. W. Ruck, S. K. Rogers, M. Kabrisky, M. E. Oxley, and B. W. Suter, “The multilayer perceptron as an approximation to a Bayes optimal discriminant function,” IEEE Trans. Neural Networks, vol. 1, no. 4, pp. 296-298, 1990.

[17] Z. Sun, R. Miller, G. Bebis, and D. Dimeo, “A Real-time Precrash Vehicle Detection System,” IEEE Intelligent Vehicles Symposium, Dearborn, MI, USA, 2000.

[18] Chi-Chen Raxle Wang and Jenn-Jier James Lien, “Automatic Vehicle Detection Using Local Features - A Statistical Approach,” IEEE Transactions on Intelligent Transportation Systems, vol. 9, no. 1, March 2008.

[19] Paul Viola and Michael J. Jones, “Robust Real-Time Face Detection,” International Journal of Computer Vision, vol. 57, no. 2, pp. 137–154, 2004.

[20] C. Papageorgiou and T. Poggio, “A trainable system for object detection,” International Journal of Computer Vision, vol. 38, no. 1, pp. 15–33, 2000.

[21] Y. Freund and R. E. Schapire, “Experiments with a new boosting algorithm,” in Proceedings of the 13th International Conference on Machine Learning (ICML ’96), pp. 148–156, Bari, Italy, July 1996.

[22] Pablo Negri, Xavier Clady, Shehzad Muhammad Hanif, and Lionel Prevost, “A Cascade of Boosted Generative and Discriminative Classifiers for Vehicle Detection,” EURASIP Journal on Advances in Signal Processing, vol. 2008, Article ID 782432, 12 pages, 2008.

[23] Chris Harris and Mike Stephens, “A Combined Corner and Edge Detector,” Proc. Fourth Alvey Vision Conference, Manchester, UK, pp. 147-151, 1988.

[24] P. Viola, M. J. Jones, and D. Snow, “Detecting pedestrians using patterns of motion and appearance,” Proc. 9th IEEE International Conference on Computer Vision (ICCV), Nice, France, vol. 1, pp. 734–741, 2003.

[25] C. Papageorgiou, M. Oren, and T. Poggio, “A general framework for object detection,” International Conference on Computer Vision, 1998.

[26] Rainer Lienhart and Jochen Maydt, “An Extended Set of Haar-like Features for Rapid Object Detection,” IEEE ICIP 2002, vol. 1, pp. 900-903, Sep. 2002.

[27] Luo-Wei Tsai, Jun-Wei Hsieh, and Kuo-Chin Fan, “Vehicle Detection Using Normalized Color and Edge Map,” IEEE Transactions on Image Processing, vol. 16, no. 3, March 2007.

[28] Wen-Chung Chang and Chih-Wei Cho, “Online Boosting for Vehicle Detection,” IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics, vol. 40, no. 3, June 2010.

[29] Ja-Fan Lee, “A Novel Vehicle Detection System Using Local and Global Features,” NCTU, July 2010.

tracking", IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1999., vol.2, no., pp.-252 Vol. 2, 1999.

[31] Peng S. and Yanjiang W., “An improved adaptive background modeling algorithm based on Gaussian Mixture Model,” Proc. 9th International Conference on Signal Processing (ICSP 2008), pp. 1436-1439, Oct. 2008.
