
Chapter 3 Vehicle Detection System

3.4 Stabilizer

3.4.2 Tracking Step

After confirming that a detection rectangle actually corresponds to a vehicle, we keep drawing it even if it is missed by the AdaBoost vehicle detector or filtered out by the false alarm eliminator. As long as the detection rectangle is confirmed as a vehicle, it will be drawn until the vehicle is out of sight. This step avoids flickering detection rectangles. If the same detection rectangle is generated by the AdaBoost vehicle detector and passes the false alarm eliminator again, we update the position and size of the rectangle and then draw it. Otherwise, we draw the rectangle using the information stored in the survival list and use a simple tracking algorithm to update its position. Figure 3-22 is the diagram of this step.

Figure 3-22 Diagram of tracking step

Remember that we only draw detection rectangles that have been confirmed as vehicles. At the end of every detection iteration, namely after every frame finishes, we examine the survival condition of every detection rectangle in the survival list. If a detection rectangle is out of sight or does not meet the survival criterion, it is deleted.
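To make this bookkeeping concrete, the following is a minimal sketch of a survival list, assuming an overlap test and a constant-velocity motion model. The names (SurvivalList, max_misses, iou_threshold) and the thresholds are illustrative assumptions, not the actual identifiers or values used in the system.

```python
def iou(a, b):
    """Intersection-over-union of two rectangles given as (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

class SurvivalList:
    def __init__(self, frame_size, max_misses=5, iou_threshold=0.5):
        self.frame_w, self.frame_h = frame_size
        self.max_misses = max_misses          # survival criterion (assumed)
        self.iou_threshold = iou_threshold    # match criterion (assumed)
        self.tracks = []  # each: {"rect": (x,y,w,h), "vel": (dx,dy), "misses": int}

    def update(self, confirmed_detections):
        """Call once per frame with rectangles that passed the false alarm eliminator."""
        unmatched = list(confirmed_detections)
        for track in self.tracks:
            best = max(unmatched, key=lambda d: iou(track["rect"], d), default=None)
            if best is not None and iou(track["rect"], best) >= self.iou_threshold:
                # Re-detected: refresh position and size from the new detection.
                ox, oy = track["rect"][:2]
                track["vel"] = (best[0] - ox, best[1] - oy)
                track["rect"] = best
                track["misses"] = 0
                unmatched.remove(best)
            else:
                # Missed this frame: extrapolate with the simple motion model.
                x, y, w, h = track["rect"]
                dx, dy = track["vel"]
                track["rect"] = (x + dx, y + dy, w, h)
                track["misses"] += 1
        for det in unmatched:  # newly confirmed vehicles enter the list
            self.tracks.append({"rect": det, "vel": (0, 0), "misses": 0})
        # Prune rectangles that are out of sight or fail the survival criterion.
        self.tracks = [t for t in self.tracks if self._alive(t)]
        return [t["rect"] for t in self.tracks]  # rectangles to draw this frame

    def _alive(self, track):
        x, y, w, h = track["rect"]
        in_sight = 0 <= x < self.frame_w and 0 <= y < self.frame_h
        return in_sight and track["misses"] <= self.max_misses
```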

Chapter 4

Experimental Results

The vehicle detection system is implemented on a PC with an Intel Core 2 Duo CPU running at 2.9 GHz and 2 GB of RAM. The integrated development environment is Microsoft Visual Studio 2008 on the Windows XP OS. The inputs are video files (uncompressed AVI) or images (PPM format), captured with a digital video camera at traffic intersections or taken from testing samples used by other research.

Section 4.1 illustrates the training process of AdaBoost, including the training dataset and a comparison of non-converging training at different layer numbers. Section 4.2 shows the results of detection in static images, with a comparison against other approaches; the testing images come from a public testing database, the MIT CBCL car database (1999). Section 4.3 demonstrates the experimental results of the switcher. Section 4.4 illustrates the experimental results of detecting vehicles in videos.

4.1 AdaBoost Training

We collected our training data by manually extracting samples from videos. There are 3431 positive samples and 11133 negative samples, both containing daytime and evening samples. All training samples are converted to gray-level images. Because the samples are collected manually, they do not have the same size.

Therefore, we normalized every sample to 22 × 18 pixels. The weak classifiers used here are generated by permuting the type, position, and scale of 15 Haar-like feature prototypes. Figure 4-1 shows some positive and negative samples, and Figure 4-2 is the flow chart of the training process.

Figure 4-1 Some training samples: (a) positive samples from (a-1) daytime and (a-2) evening; (b) negative samples from (b-1) daytime and (b-2) evening

Figure 4-2 Flow chart of AdaBoost training process
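As a concrete illustration of the weak classifiers, the sketch below evaluates one two-rectangle Haar-like feature on a normalized 22 × 18 sample using an integral image, so any rectangle sum costs four lookups. The specific feature layout and names here are assumptions for illustration; the 15 feature prototypes used in the thesis are not reproduced.

```python
import numpy as np

def integral_image(gray):
    """Cumulative sums over rows and columns of a gray-level image."""
    return gray.astype(np.int64).cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle with top-left (x, y) and size w x h."""
    a = ii[y + h - 1, x + w - 1]
    b = ii[y - 1, x + w - 1] if y > 0 else 0
    c = ii[y + h - 1, x - 1] if x > 0 else 0
    d = ii[y - 1, x - 1] if x > 0 and y > 0 else 0
    return a - b - c + d

def two_rect_feature(ii, x, y, w, h):
    """Left-half minus right-half rectangle sums: a vertical edge detector."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)

# Usage on a normalized 22 x 18 training sample (stand-in random data):
sample = np.random.randint(0, 256, (18, 22), dtype=np.uint8)
ii = integral_image(sample)
response = two_rect_feature(ii, x=2, y=3, w=8, h=6)
# A weak classifier thresholds this response: h(x) = 1 if response < theta else 0.
```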

As mentioned in Section 3.2.5, we used the non-converging training method to train our AdaBoost vehicle detector. Table 4-1 presents the statistical results for different layer numbers, tested on the MIT CBCL car database. The criteria of performance measurement are defined in Equations 4-1 and 4-2 [18].

$$\text{Detection Rate} = \frac{\text{number of correctly detected vehicles}}{\text{total number of vehicles}} \times 100\% \tag{4-1}$$

$$\text{False Alarm Rate} = \frac{\text{number of false alarms}}{\text{total number of scanned windows}} \times 100\% \tag{4-2}$$

Table 4-1 Comparison between different AdaBoost vehicle detectors

# of layers                | 14     | 13       | 11      | 10      | 9      | 8
Number of weak classifiers | 592    | 477      | 318     | 263     | 210    | 179
Detection rate             | 61.18% | 68.63%   | 85.71%  | 94.41%  | 98.14% | 98.76%
False alarm rate           | 0%     | 0.00008% | 0.0008% | 0.0014% | 0.003% | 0.0058%

Obviously, the smaller the number of layers, the higher the detection rate. This is because decreasing the number of layers decreases the complexity of the AdaBoost decision rules. Consider the last two columns, namely the AdaBoost vehicle detectors with 9 layers and 8 layers. Although both detection rates are high enough, the two detectors can be distinguished by their false alarm rates. When we decrease the layer number from 9 to 8, the detection rate increases by 0.62%, but the false alarm rate also increases by 0.0028%. The price of gaining only 0.62% in detection rate is too high, since it means dealing with many more false alarms. Therefore, in this study, the layer number of the AdaBoost vehicle detector is 9.
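The tradeoff in Table 4-1 follows directly from the cascade structure: a candidate window must pass every retained layer, so dropping layers loosens the overall decision rule and raises both rates together. The sketch below illustrates this evaluation logic under generic Viola-Jones-style assumptions; the class and function names (CascadeLayer, detect_window) are hypothetical.

```python
class CascadeLayer:
    """One boosted strong classifier: a weighted vote of weak classifiers."""
    def __init__(self, weak_classifiers, threshold):
        # each weak classifier: (feature_fn, theta, polarity, alpha)
        self.weak_classifiers = weak_classifiers
        self.threshold = threshold

    def accept(self, ii, x, y, scale):
        score = 0.0
        for feature_fn, theta, polarity, alpha in self.weak_classifiers:
            response = feature_fn(ii, x, y, scale)
            vote = 1 if polarity * response < polarity * theta else 0
            score += alpha * vote
        return score >= self.threshold

def detect_window(layers, ii, x, y, scale, num_layers):
    """Evaluate a window against the first num_layers layers only.

    Truncating the cascade (smaller num_layers) loosens the decision rule,
    which is why Table 4-1 shows the detection rate and the false alarm
    rate both rising as layers are removed.
    """
    for layer in layers[:num_layers]:
        if not layer.accept(ii, x, y, scale):
            return False  # rejected early: most non-vehicle windows exit here
    return True
```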

4.2 Results of Detecting Vehicles in Static Image

In the MIT CBCL car database, each image was extracted from raw data, scaled to 128 × 128, and aligned so that the car is in the center of the image. Few studies have reported experimental results on a public frontal-view car database; so far, R. Wang et al. [18] have provided results on the MIT CBCL car database. We also implemented the method proposed by J.F. Lee [29]. Therefore, we compared the experimental results of [18] and [29] with those of the proposed system; the comparison is presented in Table 4-2 and some detection results are shown in Figure 4-3. Because the MIT CBCL database consists of daytime images, we only use the daytime scheme without the size filter, namely AdaBoost + edge complexity, to test the performance. The criteria of performance measurement are again Equations 4-1 and 4-2.

Table 4-2 Performance comparison of MIT CBCL

Method           | PCA + ICA [18] | AdaBoost + PDBNN [29] | Proposed System
Detection rate   | 95%            | 91.93%                | 96.27%
False alarm rate | 0.002%         | 0.0031%               | 0.0015%

Figure 4-3 Some detection results of MIT CBCL car database
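For illustration only, the following sketch shows one plausible form of the daytime false alarm filter. The thesis's actual edge complexity measure is not reproduced here; this stand-in uses edge-pixel density computed from gradient magnitudes, and the thresholds (edge_threshold, low, high) are hypothetical.

```python
import numpy as np

def edge_complexity(gray_window, edge_threshold=64):
    """Fraction of pixels whose gradient magnitude exceeds edge_threshold."""
    gy, gx = np.gradient(gray_window.astype(np.float64))
    magnitude = np.hypot(gx, gy)
    return float(np.count_nonzero(magnitude > edge_threshold)) / magnitude.size

def is_vehicle_daytime(gray_window, low=0.05, high=0.60):
    """Keep a detection only if its edge density falls in a plausible band:
    vehicles show structured edges, while a bare road surface (too few
    edges) or dense clutter (too many) is rejected as a false alarm."""
    density = edge_complexity(gray_window)
    return low <= density <= high
```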

4.3 Results of Switcher

First, we show why a switching algorithm is necessary. Figure 4-4 shows the statistical charts of the two false alarm eliminating schemes. The purple line labeled Edge is the daytime scheme and the blue line labeled Hist is the evening scheme. The video sequence spans the time interval from daytime to evening.

Figure 4-4 Statistic charts of two false alarm eliminating schemes

As illustrated in Figure 4-4, the detection capability of the daytime scheme falls as time approaches evening. The evening scheme keeps a high detection rate in the daytime, but its false alarm rate there is too high. Figure 4-4 therefore explains why we need two schemes to handle false alarms in different time intervals and why we need a switcher to combine them. Figure 4-5 demonstrates the result after integrating the switching algorithm.

Figure 4-5 Statistic chart of integrating switcher
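For illustration, the sketch below shows one plausible switching rule based on mean frame brightness with hysteresis, so the system does not oscillate between schemes near dusk. The actual switching criterion is defined earlier in the thesis and is not reproduced here; the thresholds (to_evening, to_daytime) are hypothetical.

```python
import numpy as np

class SchemeSwitcher:
    def __init__(self, to_evening=70, to_daytime=90):
        self.to_evening = to_evening    # switch down when frames get this dark
        self.to_daytime = to_daytime    # switch up only once clearly brighter
        self.scheme = "daytime"

    def select(self, gray_frame):
        """Return which false alarm eliminating scheme to use for this frame."""
        brightness = float(np.mean(gray_frame))
        if self.scheme == "daytime" and brightness < self.to_evening:
            self.scheme = "evening"     # histogram-based eliminator
        elif self.scheme == "evening" and brightness > self.to_daytime:
            self.scheme = "daytime"     # edge complexity eliminator
        return self.scheme
```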

4.4 Results of Detecting Vehicles in Video

In this section, the experimental results of detecting vehicles in videos are demonstrated.

We compared our results with J.F. Lee [29], namely AdaBoost + Probabilistic Decision-Based Neural Network (PDBNN), and with a method that uses a Gaussian Mixture Model (GMM) [30][31] to establish the background image. The testing videos cover daytime and evening traffic in different scenes. Tables 4-3 to 4-7 present the statistical results of the testing videos; we also compare frames per second (FPS). Tables 4-3 to 4-5 present the comparison results for the daytime testing videos, and Tables 4-6 and 4-7 for the evening testing videos. The detection rate and false alarm rate are computed by Equations 4-3 and 4-4 respectively.

$$\text{Detection Rate} = \frac{\text{number of correctly detected vehicles}}{\text{total number of vehicles appearing in the video}} \times 100\% \tag{4-3}$$

$$\text{False Alarm Rate} = \frac{\text{number of false alarms}}{\text{total number of detections}} \times 100\% \tag{4-4}$$
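The FPS figures reported alongside these rates can be reproduced with a simple timing harness like the sketch below, where detect_vehicles is a stand-in for the full pipeline (AdaBoost detector, false alarm eliminator, and stabilizer); the function names are illustrative.

```python
import time

def measure_fps(frames, detect_vehicles):
    """Average frames per second of the detection pipeline over a video."""
    start = time.perf_counter()
    for frame in frames:
        detect_vehicles(frame)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed if elapsed > 0 else float("inf")
```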

Table 4-3 Performance comparison of video at scene 1 (Daytime)

Table 4-4 Performance comparison of video at scene 2 (Daytime)


Table 4-5 Performance comparison of video at scene 3 (Daytime)

Table 4-6 Performance comparison of video at scene 1 (Evening)


Table 4-7 Performance comparison of video at scene 3 (Evening)

In the daytime, both the method in [29] and our proposed system perform well. As for GMM, because the scenes we used have heavy traffic and contain vehicles and motorcycles at the same time, it is hard for the GMM method to segment vehicles well and to distinguish motorcycles from vehicles. In the evening, our system outperforms [29] and [30][31], with a higher detection rate and an acceptable false alarm rate. The detection rate of [29] decreases as time approaches evening because the edges of vehicles become ambiguous. For [30][31], the problems caused by motorcycles still exist; moreover, the light from vehicle lamps causes additional trouble. As for operation speed, our proposed system is also faster than [29] and close to [30][31].

As illustrated in Tables 4-3 to 4-7, the results meet our objective: the system can be applied to real-time applications. Figure 4-6 shows some captured pictures of the detection results. The red rectangles are the results of the proposed system, the green rectangles the results of [29], and the yellow rectangles the results of GMM.

Figure 4-6 Captured pictures of the detection results in videos: panels (a)-(d)

Chapter 5

Conclusions and Future Work

Without using background information, the proposed vehicle detection system is more stable and reliable. The three-stage structure of the system, namely the AdaBoost vehicle detector, the false alarm eliminator, and the stabilizer, simplifies the problem that each stage has to solve.

At the AdaBoost vehicle detector stage, the top priority is to detect as many vehicles as possible, both in daytime and evening; in other words, the detector only has to focus on detection capability, and we use non-converging training to achieve this goal. At the false alarm eliminator stage, the mission is to filter out the false alarms generated by the previous stage; we propose two schemes to deal with false alarms in daytime and evening respectively, and use a switcher to integrate them. At the stabilizer stage, the purpose is to stabilize the detection results and further remove false alarms. Each stage has its main functionality, and the stages perform much better when combined. This work demonstrates a robust vehicle detection system that operates both in daytime and evening and can be applied to real-time applications.

To further improve the performance of our system, some enhancements can be made in the future. Firstly, the system can be expanded to include rear-view and side-view vehicles. Secondly, the detection ability can be improved to detect vehicles in much darker situations. Thirdly, a more sophisticated and efficient tracking algorithm can be integrated into the system, because the tracking algorithm currently used by the stabilizer is simple.

References

[1] T. Zielke, M. Brauckmann, and W. V. Seelen, “Intensity and edge-based symmetry detection with an application to car-following,” CVGIP, Image Underst., vol. 58, no. 2, pp. 177–190, Sep. 1993.

[2] A. Kuehnle, “Symmetry-based recognition of vehicle rears,” Pattern Recognit. Lett., vol. 12, no. 4, pp. 249–258, Apr. 1991.

[3] A. Bensrhair, M. Bertozzi, A. Broggi, P. Miche, S. Mousset, and G. Toulminet, “A cooperative approach to vision-based vehicle detection,” in Proceedings of the 4th IEEE Conference on Intelligent Transportation Systems (ITSC ’01), pp. 207–212, Oakland, Calif, USA, August 2001.

[4] S. D. Buluswar and B. A. Draper, “Color Machine Vision for Autonomous Vehicles,” Int'l J. Eng. Applications of Artificial Intelligence, vol. 1, no. 2, pp. 245–256, 1998.

[5] D. Guo, T. Fraichard, M. Xie, and C. Laugier, “Color Modeling by Spherical Influence Field in Sensing Driving Environment,” Proc. IEEE Intelligent Vehicle Symp., pp. 249–254, 2000.

[6] N. Matthews, P. An, D. Charnley, and C. Harris, “Vehicle Detection and Recognition in Greyscale Imagery,” Control Eng. Practice, vol. 4, pp. 473–479, 1996.

[7] C. Demonceaux, A. Potelle, and D. Kachi-Akkouche, “Obstacle detection in a road scene based on motion analysis,” IEEE Transactions on Vehicular Technology, vol. 53, no. 6, pp. 1649–1656, 2004.

[8] A. Giachetti, M. Campani, and V. Torre, “The Use of Optical Flow for Road Navigation,” IEEE Trans. Robotics and Automation, vol. 14, no. 1, pp. 34–48, 1998.

[9] J. Collado, C. Hilario, A. de la Escalera, and J. Armingol, “Model Based Vehicle Color and Shape Features,” Proc. IEEE Int'l Conf. Intelligent Transportation Systems, 2004.

[11] J. Wang, G. Bebis, and R. Miller, “Overtaking Vehicle Detection Using Dynamic and Quasi-Static Background Modeling,” Proc. IEEE Workshop Machine Vision for Intelligent Vehicles, 2005.

[12] A. Khammari, E. Lacroix, F. Nashashibi, and C. Laurgeau, “Vehicle Detection Combining Gradient Analysis and AdaBoost Classification,” Proc. IEEE Conf. Intelligent Transportation Systems, pp. 66–71, 2005.

[13] M. Betke, E. Haritaoglu, and L. Davis, “Multiple Vehicle Detection and Tracking in Hard Real Time,” IEEE Intelligent Vehicles Symposium, pp. 351–356, 1996.

[14] J. Ferryman, A. Worrall, G. Sullivan, and K. Baker, “A Generic Deformable Model for Vehicle Recognition,” Proceedings of British Machine Vision Conference, pp. 127–136, 1995.

[15] Z. Sun, G. Bebis, and R. Miller, “On-Road Vehicle Detection Using Gabor Filters and Support Vector Machines,” Proc. IEEE Int'l Conf. Digital Signal Processing, July 2002.

[16] D. W. Ruck, S. K. Rogers, M. Kabrisky, M. E. Oxley, and B. W. Suter, “The multilayer perceptron as an approximation to a Bayes optimal discriminant function,” IEEE Trans. Neural Networks, vol. 1, no. 4, pp. 296–298, 1990.

[17] Z. Sun, R. Miller, G. Bebis, and D. Dimeo, “A Real-Time Precrash Vehicle Detection System,” IEEE Intelligent Vehicles Symposium, Dearborn, MI, USA, 2000.

[18] C.-C. R. Wang and J.-J. J. Lien, “Automatic Vehicle Detection Using Local Features – A Statistical Approach,” IEEE Transactions on Intelligent Transportation Systems, vol. 9, no. 1, Mar. 2008.

[19] P. Viola and M. J. Jones, “Robust Real-Time Face Detection,” International Journal of Computer Vision, vol. 57, no. 2, pp. 137–154, 2004.

[20] C. Papageorgiou and T. Poggio, “A trainable system for object detection,” International Journal of Computer Vision, vol. 38, no. 1, pp. 15–33, 2000.

[21] Y. Freund and R. E. Schapire, “Experiments with a new boosting algorithm,” in Proceedings of the 13th International Conference on Machine Learning (ICML '96), pp. 148–156, Bari, Italy, July 1996.

[22] P. Negri, X. Clady, S. M. Hanif, and L. Prevost, “A Cascade of Boosted Generative and Discriminative Classifiers for Vehicle Detection,” EURASIP Journal on Advances in Signal Processing, vol. 2008, Article ID 782432, 12 pages, 2008.

[23] C. Harris and M. Stephens, “A Combined Corner and Edge Detector,” Fourth Alvey Vision Conference, Manchester, UK, pp. 147–151, 1988.

[24] P. Viola, M. J. Jones, and D. Snow, “Detecting pedestrians using patterns of motion and appearance,” Proc. 9th ICCV, Nice, France, vol. 1, pp. 734–741, 2003.

[25] C. Papageorgiou, M. Oren, and T. Poggio, “A general framework for object detection,” International Conference on Computer Vision, 1998.

[26] R. Lienhart and J. Maydt, “An Extended Set of Haar-like Features for Rapid Object Detection,” IEEE ICIP 2002, vol. 1, pp. 900–903, Sep. 2002.

[27] L.-W. Tsai, J.-W. Hsieh, and K.-C. Fan, “Vehicle Detection Using Normalized Color and Edge Map,” IEEE Transactions on Image Processing, vol. 16, no. 3, Mar. 2007.

[28] W.-C. Chang and C.-W. Cho, “Online Boosting for Vehicle Detection,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 40, no. 3, June 2010.

[29] J.-F. Lee, “A Novel Vehicle Detection System Using Local and Global Features,” NCTU, July 2010.

[30] C. Stauffer and W. E. L. Grimson, “Adaptive background mixture models for real-time tracking,” IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 246–252, 1999.

[31] S. Peng and Y. Wang, “An improved adaptive background modeling algorithm based on Gaussian Mixture Model,” 9th International Conference on Signal Processing (ICSP 2008), pp. 1436–1439, Oct. 2008.
