
Chapter 3. Proposed Method

3.4. Implementation Details

In our detection procedure, once we obtain an HOG feature from an image patch, we compute the probabilistic value of this feature with respect to every possible surface pattern. In practice, we can save computation by exploiting the 3D scene information in advance. For example, if we want to compute how likely the HOG feature belongs to the top surface, we only need to consider the cases “T_1” and “T_2”. This reduces the amount of computation in the detection process.
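As a rough sketch of this shortcut, the snippet below evaluates an HOG feature only against the pattern labels that the 3D scene layout permits for a given surface. The Gaussian likelihood model, the pattern set, and all variable names are illustrative assumptions; only the labels “T_1” and “T_2” come from the text above.

```python
import numpy as np

# For each surface we keep the subset of pattern labels that the 3D scene
# layout allows, so the HOG feature is only scored against that subset
# instead of every possible surface pattern.

def log_likelihood(feature, mean, inv_cov):
    # simple Gaussian log-likelihood (illustrative model)
    d = feature - mean
    return -0.5 * d @ inv_cov @ d

def pattern_scores(feature, pattern_models, relevant_patterns):
    """Evaluate the HOG feature only against the patterns permitted
    by the 3D scene information for this surface."""
    return {p: log_likelihood(feature, *pattern_models[p])
            for p in relevant_patterns}

# Example: a top surface can only show patterns "T_1" or "T_2".
dim = 36
rng = np.random.default_rng(0)
pattern_models = {p: (rng.normal(size=dim), np.eye(dim))
                  for p in ["T_1", "T_2", "F_1", "F_2", "S_1"]}
hog_feature = rng.normal(size=dim)
print(pattern_scores(hog_feature, pattern_models, ["T_1", "T_2"]))
```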

Moreover, in the detection step, when we make a decision for a parking block, we have to consider all possible parking hypotheses of that block. Unfortunately, the number of parking hypotheses grows exponentially with the number of parking spaces in the block. In our experiments, a parking block contains as many as twenty-six parking spaces, which leads to about sixty-seven million (2^26) hypotheses. Hence, in our work, we further divide a parking block into two rows and perform the detection for the front row first. With the detection result of the front row fixed, we then detect the parking statuses of the second row. Figure 3-33 illustrates this approach. Although this approach may occasionally introduce detection errors, it saves a great deal of computation time without sacrificing much detection accuracy.

Figure 3-33 Row-wise detection procedure
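A minimal sketch of this row-wise decomposition is given below, assuming a block of twenty-six spaces split into two rows of thirteen, so that each row only requires 2^13 hypothesis evaluations instead of 2^26 for the whole block. The scoring function is a placeholder for the actual BHF evaluation; only the hypothesis counts follow from the discussion above.

```python
from itertools import product

def best_row_hypothesis(n_spaces, row_score, fixed_front=None):
    """Exhaustively search the 2**n_spaces parked/vacant hypotheses of one row."""
    best, best_h = float("-inf"), None
    for h in product([0, 1], repeat=n_spaces):   # 0 = vacant, 1 = parked
        s = row_score(h, fixed_front)
        if s > best:
            best, best_h = s, h
    return best_h

def row_score(hypothesis, fixed_front):
    # placeholder score; the real system evaluates surface likelihoods here,
    # conditioning the back row on the already-decided front row
    return -abs(sum(hypothesis) - 5)

front = best_row_hypothesis(13, row_score)                       # front row first
back = best_row_hypothesis(13, row_score, fixed_front=front)     # then the back row
print(front, back)
```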

If the parking lot is very large and there are still too many parking spaces in a single parking row, we can further reduce the computation time by considering, for inference, only a few surfaces neighboring the parking space under test. This takes much less detection time and does not noticeably degrade the overall performance. Some experimental results of this simplified approach will be shown in Chapter 4.
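As a small illustration of this local-region idea, the helper below picks the indices of the spaces whose surfaces would be consulted when inferring one space's status; the neighborhood radius is an assumed parameter, not a value from the thesis.

```python
def neighborhood(index, n_spaces, radius=1):
    """Indices of the parking spaces whose surfaces are considered when
    inferring the status of space `index` (local-region simplification)."""
    lo, hi = max(0, index - radius), min(n_spaces - 1, index + radius)
    return list(range(lo, hi + 1))

print(neighborhood(0, 26))   # [0, 1]
print(neighborhood(10, 26))  # [9, 10, 11]
```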

Moreover, although we have considered the fluctuations of surface patterns caused by variations in car brand, car type, or the parked position within the parking space, there are other situations that complicate the problem. Figure 3-34 shows two cases in which cars are parked too far inside or too far outside of the parking space. This may lead to incorrect inference. To deal with this problem, we sample the HOG feature at three different positions within the parking space and take the largest probabilistic value for inference. An example is shown in Figure 3-35, which samples the front surface at three different locations. With this approach, we can handle dramatic fluctuations of the parking position effectively without affecting the detection results of the other parking spaces.


Figure 3-34 (a) A car parked too far inside the parking space. (b) A car parked too far outside the parking space

Figure 3-35 Three sampling positions of feature extraction for the front surface
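The three-position sampling can be sketched as follows. The feature extractor and likelihood function below are simple placeholders standing in for the modified HOG feature and the surface probability used in our system; only the idea of shifting the sampling window and keeping the largest probabilistic value comes from the text.

```python
import numpy as np

def extract_hog(patch):
    # placeholder feature: orientation histogram of image gradients
    gy, gx = np.gradient(patch.astype(float))
    hist, _ = np.histogram(np.arctan2(gy, gx), bins=9, range=(-np.pi, np.pi),
                           weights=np.hypot(gx, gy))
    return hist / (np.linalg.norm(hist) + 1e-6)

def surface_probability(feature, reference=None):
    # placeholder likelihood: similarity to a reference template
    if reference is None:
        reference = np.ones_like(feature) / np.sqrt(feature.size)
    return float(feature @ reference)

def robust_surface_probability(image, base_box, offsets=(-10, 0, 10)):
    """Sample the surface at three shifted positions and keep the best score."""
    x, y, w, h = base_box
    probs = []
    for dx in offsets:
        patch = image[y:y + h, x + dx:x + dx + w]
        probs.append(surface_probability(extract_hog(patch)))
    return max(probs)

img = np.random.default_rng(1).integers(0, 255, size=(240, 320))
print(robust_surface_probability(img, (100, 80, 40, 60)))
```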

Chapter 4. Experimental Results

In this chapter, we show some experimental results of our proposed method.

Before introducing our experiments, we first describe the parking lot used in them. This parking lot contains three major blocks with seventy-two parking spaces in total. Figure 4-1 shows an image of the parking lot and the detection regions of the three parking blocks.

Figure 4-1 (a) An image of the parking lot. (b) The corresponding detection regions.

To evaluate the performance of our system, we calculate the false positive rate (FPR), false negative rate (FNR), and accuracy (ACC), which are defined below.

$$\mathrm{FPR} = \frac{\text{number of parked spaces detected as vacant}}{\text{total number of parked spaces}} \qquad \text{(Eq. 4-1)}$$

$$\mathrm{FNR} = \frac{\text{number of vacant spaces detected as parked}}{\text{total number of vacant spaces}} \qquad \text{(Eq. 4-2)}$$

$$\mathrm{ACC} = \frac{\text{number of correct detections among both parked and vacant spaces}}{\text{total number of tested spaces}} \qquad \text{(Eq. 4-3)}$$
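For reference, a small routine that computes these three measures from ground-truth and predicted labels might look as follows; the 0/1 label coding (1 = parked, 0 = vacant) is an assumption for illustration.

```python
def detection_metrics(ground_truth, predicted):
    """Compute FPR, FNR, and ACC as defined in Eq. 4-1 to Eq. 4-3."""
    parked = [(g, p) for g, p in zip(ground_truth, predicted) if g == 1]
    vacant = [(g, p) for g, p in zip(ground_truth, predicted) if g == 0]
    fpr = sum(p == 0 for _, p in parked) / len(parked)   # parked detected as vacant
    fnr = sum(p == 1 for _, p in vacant) / len(vacant)   # vacant detected as parked
    acc = sum(g == p for g, p in zip(ground_truth, predicted)) / len(ground_truth)
    return fpr, fnr, acc

print(detection_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0]))
# (0.333..., 0.333..., 0.666...)
```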

In our experiments, we divide the whole day into two periods: the day period (5:00~19:00) and the night period (19:00~5:00), and we report the performance of each period separately. Since there is almost no change of parking status during the early morning (0:00~5:00), we do not take that period into account. Moreover, the multi-exposure preprocessing is used only during the night period; it is unnecessary during the day period.

Table 4-1 shows our experimental results for the day period, including the performance of each parking block and the performance on a normal day, a sunny day, a cloudy day, and a rainy day, respectively. In Figure 4-2, we show image samples captured under these four types of weather.

Table 4-1 Day period performance of the proposed method


Figure 4-2 Image samples captured on (a) a normal day, (b) a sunny day, (c) a cloudy day, and (d) a rainy day

# of tested spaces        Proposed method

As mentioned earlier, there are four major problems in daytime vision-based parking space detection: the occlusion effect, the shadow effect, perspective distortion, and fluctuations of the lighting condition. The following experiments demonstrate that our method can deal with these problems effectively. In Figure 4-3, we show detection results under different kinds of shadows. With the use of the modified HOG feature, our method handles the shadow problem effectively.


Figure 4-3 (a)(c)(e): Images with shadow effects. (b)(d)(f): The corresponding detection results

Besides the shadow effect, our proposed method can also deal with fluctuations of the lighting condition. Some examples are shown in Figure 4-4. Although the colors change dramatically across these images, our algorithm still works well. This is because our system uses gradient information, which is less affected by changes in illumination.
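A quick numerical check of this property is sketched below: a global multiplicative illumination change scales both gradient components equally, so the gradient orientations on which HOG is built remain unchanged. The test image and the scaling factor are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)
patch = rng.random((32, 32))
darker = 0.4 * patch                      # simulated global illumination drop

def orientations(img):
    gy, gx = np.gradient(img)
    return np.arctan2(gy, gx)             # gradient orientation field

# orientations are identical under a positive global scaling of the image
print(np.allclose(orientations(patch), orientations(darker)))   # True
```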


Figure 4-4 (a)(c)(e): Images with different lighting conditions. (b)(d)(f): The corresponding detection results

Moreover, for comparison with other algorithms, we test Dan's method [10], Wu's method [11], Huang's method [6], and Huang's method [15], together with our method, over the dataset released by Huang in [15]. In this dataset, there are five rows of parking spaces in the parking lot and forty-six parking spaces in total. Here, Row 1 denotes the bottom row and Row 5 denotes the top row. Seq_1 is an image sequence captured on a normal day, Seq_2 on a sunny day, and Seq_3 on a cloudy day. Figure 4-5 shows some samples of these three image sequences, and Table 4-2 lists the experimental results.

Compared with Huang's work in [15], which achieves the best performance among the other algorithms, our method achieves comparable performance in the day period. However, Huang's method is much more complicated and needs extra information such as the angle of the sunlight direction. Besides, our system outperforms Huang's in the cloudy-day case. This is because Huang's method relies heavily on color information, which makes it less accurate on cloudy days.

Table 4-2 Comparisons of day period performance

                 Huang [6]              Huang [15]             Proposed method
                 FPR    FNR    ACC      FPR    FNR    ACC      FPR    FNR    ACC
(a) Normal day
(b) Sunny day
(c) Cloudy day

For the night period, the multi-exposure preprocessing is included in our detection procedure. In addition, before the fusion stage, we apply a median filter to the multi-exposure images, with a 3×3 mask for the EV=10 setting, a 5×5 mask for the EV=50 setting, and a 7×7 mask for the EV=90 setting. This filtering greatly reduces the image noise that arises when capturing images in a dark environment. However, the headlights of moving cars and the lamps in the parking lot may also cause dramatic fluctuations of the lighting condition. Even though these conditions are not explicitly considered in the design of our system, it can still roughly handle these cases with the help of the modified HOG feature. In Table 4-3, we report the performance of our system over the night period when there is no moving car in the parking lot, together with a comparison with Wu's method [11]. Some images of these night-period sequences and their detection results are shown in Figure 4-6.
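A possible sketch of this preprocessing step is shown below, using OpenCV. The synthetic exposures and gain values are only there to make the snippet self-contained, and OpenCV's MergeMertens implementation is used as a stand-in for the exposure-fusion stage of [13]; only the mask sizes follow the settings described above.

```python
import cv2
import numpy as np

# Simulate three exposures of a dark scene, median filter each one with the
# mask size used in the text (3x3 for EV=10, 5x5 for EV=50, 7x7 for EV=90),
# and fuse the filtered images before detection.
rng = np.random.default_rng(0)
scene = rng.integers(0, 80, size=(240, 320, 3), dtype=np.uint8)   # dark scene

exposures = []
for gain, ksize in [(1.0, 3), (2.5, 5), (4.0, 7)]:                # EV=10, 50, 90
    exposed = cv2.convertScaleAbs(scene, alpha=gain)               # simulated exposure
    noisy = cv2.add(exposed, rng.integers(0, 20, scene.shape, dtype=np.uint8))
    exposures.append(cv2.medianBlur(noisy, ksize))                 # denoise per exposure

fused = cv2.createMergeMertens().process(exposures)                # float image in [0, 1]
fused_8u = np.clip(fused * 255, 0, 255).astype(np.uint8)           # input to detection
```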


Figure 4-6 (a)(c)(e): Fused images at night. (b)(d)(f): The corresponding detection results

Table 4-3 Night period performance of the proposed method

         # of tested spaces          Wu [11]                    Proposed method
         vacant  parked  total       FPR     FNR     ACC        FPR     FNR     ACC
Seq_1    4592    3112    7704        0.1520  0.1021  0.8777     0.0135  0.0294  0.9770
Seq_2    4659    2631    7200        0.1281  0.0989  0.8904     0.0049  0.0282  0.9803
Seq_3    4540    2588    7128        0.1565  0.1260  0.8629     0.0232  0.0172  0.9806
Block_1  4480    3476    7956        0.0630  0.1313  0.8986     0.0147  0.0221  0.9811
Block_2  5051    2905    7956        0.1535  0.1026  0.8788     0.0048  0.0123  0.9904
Block_3  4170    1950    6120        0.2821  0.0928  0.8469     0.0256  0.0434  0.9623


Figure 4-7 (a)(b) Night-period detection with moving cars in the scene. Row 1: image samples with EV=10, EV=50, and EV=90. Row 2: fused images and the corresponding detection results

When there are moving cars in the parking lot, the change of illumination caused by their headlights may dramatically affect our decision in vacant parking space detection. Moreover, because of the exposure-fusion preprocessing, the fused image may contain motion blur caused by the moving cars. Figure 4-7 shows two examples with moving cars in the scene and their corresponding detection results.

In Table 4-4 and Table 4-5, we compare the detection results obtained by estimating the status of a whole row at one time with the results obtained by using only the surfaces neighboring the tested parking space. In addition, corresponding to Table 4-1 and Table 4-3, Table 4-6 and Table 4-7 list the experimental results obtained with different training sets. We can see that the performance of training over the whole-day dataset is quite similar to that of training over only the day-period or night-period dataset.

Table 4-4 Performance of the two detection approaches in the day period

         # of tested spaces          Row-wise test              Local-region test
         vacant  parked  total       FPR     FNR     ACC        FPR     FNR     ACC

Table 4-5 Performance of the two detection approaches in the night period

         # of tested spaces          Row-wise test              Local-region test
         vacant  parked  total       FPR     FNR     ACC        FPR     FNR     ACC

Table 4-6 Day-period performance over different training sets

         # of tested spaces          Training (daytime only)    Training (whole day)
         vacant  parked  total       FPR     FNR     ACC        FPR     FNR     ACC

Table 4-7 Night-period performance over different training sets

         # of tested spaces          Training (nighttime only)  Training (whole day)
         vacant  parked  total       FPR     FNR     ACC        FPR     FNR     ACC

Chapter 5. Conclusions

In this thesis, we propose a vision-based vacant space detection framework for an all-day parking lot management system. To perform vacant parking space detection both day and night, we propose two main ideas. First, we use a preprocessing procedure to recover the information lost in dark images. Second, we treat the whole parking lot as a surface-like structure with 3D scene information. Based on this surface-like structure, we model the parking spaces with a Bayesian hierarchical framework (BHF). In this framework, we simultaneously consider the prior information from the 3D scene layer and the data passed from the observation layer, which makes our decisions more stable and accurate. The experimental results demonstrate that our proposed method can successfully handle the shadow effect, perspective distortion, the occlusion effect, and fluctuations of the lighting condition, and can work reliably both day and night. In addition, our framework is quite flexible and can be easily modified to fit various kinds of applications.

References

[1] H. Schneiderman, and T. Kanade, “Object Detection Using the Statistics of Parts,” International Journal of Computer Vision, vol. 56, no. 3, pp. 151-177, Feb 2004.

[2] P. Viola, and M. J. Jones, “Robust Real-Time Face Detection,” International Journal of Computer Vision, vol. 57, no. 2, pp. 137–154, May 2004.

[3] N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” IEEE Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 886-893, June 2005.

[4] L.-W. Tsai, J.-W. Hsieh, and K.-C. Fan, “Vehicle detection using normalized color and edge map,” IEEE Transactions on Image Processing, vol. 16, no. 3, pp. 850-864, Mar 2007.

[5] S. Funck, N. Mohler, and W. Oertel, “Determining car-park occupancy from single images,” IEEE Intelligent Vehicles Symposium, pp. 325-328, June 2004.

[6] C. C. Huang, S. J. Wang, Y. J. Chang, and T. Chen, “A Bayesian Hierarchical Detection Framework for Parking Space Detection,” IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 2097-2100, 2008.

[7] K. Yamada and M. Mizuno, “A vehicle parking detection method using image segmentation,” Electronics and Communications in Japan, vol. 84, no. 10, pp. 25-34, 2001.

[8] T. Fabian, “An Algorithm for Parking Lot Occupation Detection,” IEEE Computer Information Systems and Industrial Management Applications, pp. 165-170, June 2008.

[9] R. J. López Sastre, P. G. Jiménez, F. J. Acevedo, and S. Maldonado Bascón, “Computer Algebra Algorithms Applied to Computer Vision in a Parking Management System,” IEEE International Symposium on Industrial Electronics, pp. 1675-1680, June 2007.

[10] N. Dan, “Parking Management System and Method,” US Patent 20030144890A1, Jul 2003.

[11] Q. Wu, C. C. Huang, S. Y. Wang, W. C. Chiu, and T. H. Chen, “Robust Parking Space Detection Considering Inter-Space Correlation”, IEEE International Conference on Multimedia and Expo, pp. 659-662, July 2007.

[12] P. E. Debevec and J. Malik, “Recovering High Dynamic Range Radiance Maps from Photographs,” SIGGRAPH 97, pp. 369-378, August 1997.

[13] T. Mertens, J. Kautz, and F. Van Reeth, “Exposure fusion,” The 15th Pacific Conference on Computer Graphics and Applications, pp. 382-390, 2007.

[14] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan, “Object Detection with Discriminatively Trained Part Based Models,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 9, Sept 2010.

[15] C. C. Huang, S. J. Wang, “A Hierarchical Bayesian Generation Framework for Vacant Parking Space Detection,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 20, no. 12, pp.1770-1785, Dec 2010.

[16] J. Wu, Z. Wang, and Z. Fang, “Application of Retinex in Color Restoration of Image Enhancement to Night Image,” International Congress on Image and Signal Processing, Oct 2009.

[17] A. Yamasaki, H. Takauji, S. Kaneko, T. Kanade, and H. Ohki, “Denighting: Enhancement of nighttime images for a surveillance camera,” International Conference on Pattern Recognition, Dec 2008.

[18] Y. Zhao, H. Gong, L. Lin, and Y. Jia, “Spatio-temporal patches for night background modeling by subspace learning,” International Conference on Pattern Recognition, Dec 2008.
