Vacant Parking Space Detection Based on Plane-based Bayesian Hierarchical Framework


Ching-Chun Huang, Member, IEEE, Yu-Shu Tai, and Sheng-Jyh Wang, Member, IEEE

Abstract—In this paper, we propose a vacant parking space detection system that operates day and night. In the daytime, the major challenges of the system include dramatic lighting variations, shadow effect, inter-object occlusion, and perspective distortion. In the nighttime, the major challenges include insufficient illumination and complicated lighting conditions. To overcome these problems, we propose a plane-based method which adopts a structural 3-D parking lot model consisting of plentiful planar surfaces. The plane-based 3-D scene model plays a key part in handling inter-object occlusion and perspective distortion. On the other hand, to alleviate the interference of unpredictable lighting changes and shadows, we propose a plane-based classification process. Moreover, by introducing a Bayesian hierarchical framework to integrate the 3-D model with the plane-based classification process, we systematically infer the parking status. Last, to overcome the insufficient illumination in the nighttime, we also introduce a preprocessing step to enhance image quality. The experimental results show that the proposed framework can achieve robust detection of vacant parking spaces in both daytime and nighttime.

Index Terms—Bayesian inference, histogram of oriented gradients, image classification, parking space detection.

I. Introduction

Recently, video surveillance systems have become increasingly important in our daily life. With the noticeable progress of computer vision techniques, many video surveillance systems have been proposed to provide new kinds of intelligent functions, like object detection and tracking. Following the trend, vision-based systems for smart parking lot management have also attracted great attention in recent years. In general, these vision-based parking lot management systems can provide valuable information, like the location of vacant parking spaces, as well as some value-added services, like parking space guidance and vehicle finding. In this paper, we focus on a basic, yet crucial, function of vision-based

Manuscript received August 27, 2012; revised December 9, 2012, February 16, 2013; accepted February 18, 2013, February 20, 2013. Date of publication March 27, 2013; date of current version August 30, 2013. This work was supported in part by the National Science Council of Taiwan, under Grants 100-2218-E-151-007 and 101-2221-E-151-045. This paper was recommended by Associate Editor P. L. Correia.

C.-C. Huang is with the Department of Electrical Engineering, Na-tional Kaohsiung University of Applied Sciences, Kaohsiung 807, Taiwan, (e-mail:chingchun.huang5@gmail.com).

Y.-S. Tai and S.-J. Wang are with the Department of Electronics Engineer-ing, Institute of Electronics, National Chiao Tung University, Hsinchu 30010, Taiwan (e-mail: kinn 92@hotmail.com; shengjyh@faculty.nctu.edu.tw).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TCSVT.2013.2254961

Fig. 1. Challenges for robust vacant parking space detection, including perspective distortion, inter-object occlusion, shadow effect, variations of lighting condition, and insufficient illumination at night.

parking lot management systems: automatic detection of vacant parking spaces.

In Fig. 1, we show several parking lot images in our dataset. To robustly detect vacant parking spaces, we have to deal with a few challenges, including dramatic lighting variations, shadows cast on the scene, varying perspective distortion in the image, and inter-object occlusion among parked cars and the ground plane. Besides, insufficient illumination during the nighttime is another challenge. To overcome these problems, many novel methods have been proposed in the past. These methods can be roughly categorized into four major types: car-oriented methods, space-oriented methods, hybrid methods, and parking-lot-oriented methods.

Car-oriented methods [1]–[5] target car detection, and they determine the status of parking spaces based on the detection result. Space-oriented methods [6]–[13] model the appearance of the ground plane in advance. If the current appearance of a parking space is dissimilar to that model, they identify the parking space as occupied. Some hybrid methods [14]–[16], on the other hand, combine both space detection and car detection to find vacant parking spaces. These hybrid methods focus on the design of the fusion mechanism to achieve improved performance. Recently, unlike car-oriented or space-oriented methods which focus only on certain aspects of parking lots, parking-lot-oriented methods [17], [18] have been proposed to model the whole parking lot in unity and to integrate the 3-D scene model with the image observation for parking status inference.

For car-oriented methods, Tsai et al. [1] propose a global color-based model to efficiently detect vehicle candidates. In


tain consistent texture features, such as histogram of oriented gradients (HOG) [19], to overcome lighting variations and geometric distortion. In general, these methods can achieve robust detection even under dramatic variations of lighting condition. However, for vacant space detection, these car-oriented methods do not take into account the inter-vehicle occlusion problem.

For space-oriented methods, the modeling of parking spaces is the key. Eigen-space representation [6] and many background modeling algorithms [7]–[9] provide pixel-based methods to build ground models that can adapt to lighting variations. However, these pixel-based space modeling methods are usually sensitive to the shadows cast over the ground. To relieve the shadow effect, some texture-based methods assume that a vacant parking space possesses homogeneous appearance. Hence, they design a certain measure of homogeneity to detect vacant parking spaces. For example, Yamada et al. [10] design a homogeneity measure by calculating the area of fragmental segments; Lee et al. [11] suggest an entropy-based homogeneity metric; and Fabian [12] uses a segment-based homogeneity measure similar to that in [10]. However, due to perspective distortion, a distant parking space may only occupy a small region in the captured image. This usually leads to unstable homogeneity measurement. To overcome the perspective distortion problem, López-Sastre et al. [13] suggest a method to rectify the perspective distortion and use a Gabor filter bank to derive the homogeneity feature for vacant parking space detection. Basically, these space-oriented methods still suffer from the inter-object occlusion problem, which occurs when a parking space is partially or fully occluded by a car at an adjacent parking space.

Some researchers adopt hybrid methods to detect vacant parking spaces. For example, Dan [14] trains a general support vector machine (SVM) classifier to differentiate car regions from space regions by using image features made of the color vectors inside the parking space. However, this method cannot properly handle the inter-occlusion problem. To overcome the occlusion problem, Wu et al. [15] propose a method to group three neighboring spaces as a unit and they define the color histogram of the three-space unit as the feature in their SVM classifier. Even though these hybrid methods have considered both car model and space model, the classification performance of their algorithms is still affected by the environmental variations. In general, the lighting changes may dramatically degrade the detection accuracy. On the other hand, in [16], the authors propose an efficient method to combine static and dynamic information for vacant parking space detection. To extract static information, a histogram classification process is used to detect pavement regions while an edge counting process is used to identify vehicle regions. To extract dynamic information, they use blob analysis to track moving vehicles.

TABLE I
Comparison of vacant parking space detection methods (PD: perspective distortion; IO: inter-object occlusion; SE: shadow effect; LV: lighting variations; IIN: insufficient illumination at night; O: handled; X: not handled)

Type         Method                         PD   IO   SE   LV   IIN
Car          Tsai [1]                       X    X              X
Car          Mejía-Iñigo [2]                X    X              X
Car          Masaki [3]                     X    X              X
Car          Schneiderman [4]                    X    O    O
Car          Felzenszwalb [5]                    X    O    O
Space        Funck [6]                      X    X    X    O    X
Space        Background modeling [7]–[9]    X    X    X    O    X
Space        Yamada [10]                    X    X              X
Space        Lee [11]                       X    X              X
Space        Fabian [12]                    X    X              X
Space        López-Sastre [13]              O    X              X
Hybrid       Dan [14]                                      X    X
Hybrid       Wu [15]                             O              X
Hybrid       Blumer [16]                                        X
Parking-lot  Huang [17]                     O    O              X
Parking-lot  Huang [18]                     O    O    O         X
Parking-lot  Proposed method                O    O    O    O    O
In order to alleviate the inter-occlusion problem, however, their camera usually needs to be placed at a very high altitude.

Rather than focusing on the detection of individual cars or parking spaces, parking-lot-oriented methods model the geometric structure of the whole parking lot in order to properly handle the inter-occlusion situations. In [17] and [18], Huang et al. propose a Bayesian hierarchical framework (BHF) to integrate the 3-D scene knowledge and the classification of image pixels into a three-layer hierarchical framework. The structural scene properties of a parking lot, together with the pixel-based car model and parking space model, are well utilized to improve the performance of vacant space detection. Moreover, to conquer the variations of lighting condition and the shadows cast on the scene, Huang et al.'s method assumes that the parking lot scene is uniformly lighted by sunlight, and they have made a lot of effort to dynamically estimate the lighting condition. However, although their method can produce robust detection results in the daytime, it fails in the nighttime due to the complicated lighting condition at night. Actually, as far as we know, very few systems have ever discussed the vacant parking space detection problem in the nighttime.

For the sake of clarity, we summarize in Table I a comparison of several algorithms for vacant parking space detection. As indicated in this table, none of these existing methods can handle all five types of challenges, including perspective distortion, inter-object occlusion, shadow effect, lighting variations, and insufficient illumination at night. In this paper, a new parking-lot-oriented method is presented to deal with all these challenges.

In the proposed method, we further improve the Bayesian hierarchical framework (BHF) in [18] to achieve robust detection of vacant parking spaces in both daytime and nighttime. In our method, we model the whole parking lot as a 3-D


Fig. 2. System flow of the proposed algorithm.

structure consisting of plentiful planar surfaces. A plane-based classification process using robust texture features is proposed to replace the pixel-based classification in [18]. Furthermore, by using a modified BHF framework for inference, we can systematically model the relation between the 3-D planar surfaces and their image appearance. The inter-vehicle occlusion is well modeled in the modified BHF framework, and illumination-insensitive object textures are well used for robust parking space detection. In addition, by introducing a multi-exposure pre-process to enhance the captured image sequence, we can perform vacant parking space detection day and night under a unified framework.

The rest of this paper is organized as follows. In Section II, we give an overview of the proposed system. In Section III, we illustrate the preprocessing stage for generating high-dynamic-range images in the nighttime. In Section IV, we present the proposed plane-based BHF inference framework for vacant space detection. Experimental results and discussions are presented in Section V. Last, Section VI concludes this paper.

II. Overview of the Proposed Method

In order to develop a vacant parking space detection system that can work all day, we focus on two major issues. The first issue is how to obtain well-exposed images for inference. In an outdoor scene, the lighting condition may change dramatically. Those variations greatly affect the appearance of image features, such as edges or colors, especially in the nighttime. To deal with this issue, we adopt a pre-process to enhance the visibility of image contents. The second issue is how to improve the performance of vacant parking space detection and how to speed up the system for practical applications. To deal with this issue, a plane-based BHF framework is proposed for vacant parking space detection. By decomposing a parking lot into many 3-D planar surfaces, we can effectively exploit the texture information for vacant parking space detection and well represent the patterns of inter-vehicle occlusion.

In Fig. 2, we show the flowchart of the proposed method, which consists of a preprocessing step and a detection step. In the preprocessing step, we design a multi-exposure system to

Fig. 3. (a) Image with a short exposure. (b) Image with a medium exposure. (c) Image with a long exposure. (d) Fusion result of the images in (a), (b), and (c).

capture images with different exposure settings. These images are then fused to obtain images with improved quality. In the detection step, a plane-based BHF inference framework is proposed. First, based on the proposed plane-based 3-D scene model, the normalized patches of interest, corresponding to the projection of 3-D surfaces onto the fused image, are identified. For each normalized patch, histogram of oriented gradients (HOG) features are extracted and further compressed via linear discriminant analysis (LDA) [20]. Finally, we use the proposed plane-based BHF framework to integrate 3-D scene information with plane-based classification results for the optimal inference of the status of the parking spaces. In the following sections, we explain the details of the proposed system step by step.

III. Preprocessing Step

When capturing images in a dark environment, the color and texture information degrades. The degradation of image features may dramatically deteriorate the performance of vacant parking space detection. Hence, a pre-processing stage is used in our system to enhance the quality of nighttime images. Up to now, plentiful methods have been proposed to enhance image contrast, like the Retinex-based algorithms in [21], [22], the histogram-equalization-based algorithms in [23]–[26], the gray-level grouping methods in [27], [28], the discrete cosine transform (DCT)-based method in [29], the tone-mapping method in [30], and the Bayesian inference method in [31]. Although those methods can improve image quality impressively, some side effects, like noise amplification and halo effects, may generate extra image features and harm the following detection process. Different from those approaches, which are based on a single image, we enhance nighttime images based on multiple images under different exposure settings. In a dark environment, some image features, like colors or edges, may be missing if the exposure time is too short, as shown in Fig. 3(a). On the contrary, image color or intensity may get saturated if the exposure time is too long,


Fig. 4. Illustration of multi-exposure image capturing.

as shown in Fig. 3(c). The choice of exposure time is usually a trade-off. With the use of multiple images under different exposure settings, we are able to extract useful image features in both dark and bright areas. By fusing these images into a single image, we can obtain an image with improved details, as shown in Fig. 3(d).

To get multi-exposure images, we use the AXIS M1114 IP camera, which can adjust the exposure value (EV) during image capturing. By using the software development kit (SDK) provided by AXIS, we capture images from a short exposure period to a long exposure period in a cyclic manner with a period of N image frames, as illustrated in Fig. 4. In our system, the longest and shortest exposure periods are 3 s and 0.33 s, respectively. By using the two-step exposure fusion method proposed in [32] to combine every N images, we obtain images of improved contrast.
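The exposure fusion of [32] blends the frames with per-pixel weights that favor locally contrasty, saturated, and well-exposed pixels. The following is a minimal single-scale sketch of that weighting idea; the weight terms and constants are illustrative assumptions, and the actual method in [32] blends with Laplacian pyramids to avoid seams.

```python
import numpy as np

def exposure_fusion(images, sigma=0.2):
    """Single-scale sketch of Mertens-style exposure fusion.

    images: list of float RGB arrays in [0, 1], shape (H, W, 3).
    """
    weights = []
    for img in images:
        gray = img.mean(axis=2)
        # contrast: magnitude of a discrete Laplacian response
        lap = np.abs(-4 * gray
                     + np.roll(gray, 1, 0) + np.roll(gray, -1, 0)
                     + np.roll(gray, 1, 1) + np.roll(gray, -1, 1))
        # saturation: standard deviation across color channels
        sat = img.std(axis=2)
        # well-exposedness: Gaussian bump around mid-gray, per channel
        wexp = np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2)).prod(axis=2)
        weights.append(lap * sat * wexp + 1e-12)
    weights = np.stack(weights)
    weights /= weights.sum(axis=0, keepdims=True)     # normalize per pixel
    fused = sum(w[..., None] * img for w, img in zip(weights, images))
    return np.clip(fused, 0.0, 1.0)

# Toy demo: fuse an under-exposed and a partly saturated frame of one scene.
rng = np.random.default_rng(0)
scene = rng.random((8, 8, 3))
short = np.clip(scene * 0.3, 0, 1)
long_ = np.clip(scene * 1.8, 0, 1)
fused = exposure_fusion([short, long_])
```

At each pixel the fused result leans toward whichever exposure recorded that region best, which is exactly why dark and saturated areas both keep detail in Fig. 3(d).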

IV. Detection Step

A. Plane-Based Structure and Feature Extraction

In our system, we attempt to find a way to benefit from both car-oriented and space-oriented approaches. Car-oriented methods usually check a car area like that in Fig. 5(a), while space-oriented methods check the ground area like that in Fig. 5(b). In our approach, we treat the parking spaces as a set of cuboids, as illustrated in Fig. 5(c). Each cuboid is composed of six patches, as illustrated in Fig. 5(d). Based on the 3-D cuboid model, we represent the structure of the parking lot by a set of 3-D planar surfaces, as shown in Fig. 5(e). By projecting those 3-D surfaces onto the image, we get image patches of parallelogram shape. These patches are to be used for the status inference of parking spaces.
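To make the geometry concrete, the sketch below projects the corners of one parking-space cuboid onto the image plane with a pinhole camera model; the cuboid dimensions and the camera matrices are made-up values for illustration, not our calibration results.

```python
import numpy as np

def project_points(P, pts3d):
    """Project 3-D points (N, 3) to image coordinates with a 3x4 camera matrix P."""
    pts_h = np.hstack([pts3d, np.ones((len(pts3d), 1))])   # homogeneous coords
    uvw = pts_h @ P.T
    return uvw[:, :2] / uvw[:, 2:3]                        # perspective divide

# One parking-space cuboid (length x width x height) in world units (meters).
L, W, H = 5.0, 2.5, 1.5
ground = np.array([[0, 0, 0], [L, 0, 0], [L, W, 0], [0, W, 0]], float)
top = ground + [0, 0, H]

# Illustrative intrinsics K and pose [R | t]; assumed values, not calibration.
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], float)
Rt = np.array([[1, 0, 0, -2],
               [0, 0, -1, 1],
               [0, 1, 0, 10]], float)
P = K @ Rt

quad_ground = project_points(P, ground)   # parallelogram-shaped G-patch corners
quad_top = project_points(P, top)         # corresponding T-patch corners
```

The projected quadrilaterals are exactly the parallelogram-shaped image patches described above; the same projection applied to the side and front surfaces yields the S- and F-patches.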

Due to the perspective projection in image formation, image patches may appear to be quite different in shape and size. To overcome perspective distortion, we normalize each image patch into a rectangle, with Rl pixels in length and Rw pixels in width. After that, we extract features from the normalized patches. For feature extraction, we adopt the HOG feature proposed in [19], which is less affected by shadows and the changes of illumination. To extract HOG features, a normalized image patch is regularly segmented into

Fig. 5. (a) Image region for car-based inference. (b) Image region for space-based inference. (c) Cuboid modeling of parking spaces. (d) Planar surfaces in the cuboid model. (e) Parking lot model composed of planar surfaces.

Fig. 6. Patch normalization and HOG feature extraction.

non-overlapping cells, with each cell containing Cl×Cw pixels. In total, there would be (Rl/Cl)·(Rw/Cw) cells in each normalized patch. For each cell, a histogram of oriented gradients, as defined in [19], is built. Each histogram has Hb histogram bins. By combining the histograms of all cells in the normalized patch, we obtain the HOG feature. In our system, the parameters (Rl×Rw, Cl×Cw, Hb) are empirically chosen to be (64×32, 16×16, 10). That is, each normalized patch contains eight cells and the dimension of its HOG feature is 80. In Fig. 6, we illustrate the processes of patch normalization and HOG feature extraction. As will be explained later, these high-dimensional HOG features will be converted into low-dimensional features via LDA so that the following inference process can be implemented in a more efficient way.
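The cell-wise histogram construction can be sketched as follows. This is a simplified HOG (per-cell orientation histograms without the block normalization of [19]), using the stated parameters so that a 64×32 normalized patch yields an 80-dimensional feature.

```python
import numpy as np

def hog_feature(patch, cell=(16, 16), bins=10):
    """Simplified HOG: per-cell histograms of unsigned gradient orientation,
    weighted by gradient magnitude, each cell L2-normalized."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)              # unsigned, [0, pi)
    bin_idx = np.minimum((ang / np.pi * bins).astype(int), bins - 1)

    ch, cw = cell
    H, W = patch.shape
    feat = []
    for r in range(0, H, ch):
        for c in range(0, W, cw):
            idx = bin_idx[r:r + ch, c:c + cw].ravel()
            w = mag[r:r + ch, c:c + cw].ravel()
            hist = np.bincount(idx, weights=w, minlength=bins)
            feat.append(hist / (np.linalg.norm(hist) + 1e-9))
    return np.concatenate(feat)

# A 64x32 normalized patch gives (64/16)*(32/16) = 8 cells x 10 bins = 80 dims.
patch = np.random.default_rng(1).random((64, 32))
f = hog_feature(patch)
```

Because the feature is built from gradient orientations rather than raw intensities, it stays comparatively stable under shadows and global illumination changes.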

B. Patch Classification

In this section, we explain how to perform patch classification in the proposed plane-based BHF framework. As mentioned before, in our plane-based model, each parking space is approximated as a cuboid with six 3-D planar surfaces. We classify these surfaces into four different types: ground surface (G), side surface (S), front (or rear) surface (F), and top surface (T). Via perspective projection, these four types of planar surfaces are projected onto four types of image patches:


Fig. 7. Illustration for the subclass definitions of image patch.

G-patch, S-patch, F-patch, and T-patch. Due to the inter-object occlusion in the 3-D space, we further classify each type of image patch into a few sub-classes. In Fig. 7, we illustrate how we define the sub-classes for each kind of image surface. Here, without loss of generality, we use a camera configuration with a 45-degree view to explain the proposed patch classification. For the G-patch of the parking space “c” in Fig. 7, its image content is affected not only by the status of the parking space “c” but also by the status of the adjacent parking space “b.” Depending on whether these two parking spaces are occupied or vacant, there are four types of image patterns according to four different parking statuses: 1) “c” is occupied while “b” is vacant; 2) “c” is vacant while “b” is occupied; 3) both “c” and “b” are occupied; and 4) both “c” and “b” are vacant. Similarly, for the S-patch shared by “b” and “c” or the F-patch shared by “a” and “b,” there would be four major kinds of image patterns. For the T-patch of the parking space “b,” on the other hand, we may either classify its image patterns into four sub-classes that relate to the four status combinations of the spaces “b” and “c,” or into eight sub-classes that relate to the eight status combinations of “a,” “b,” and “c.” In our experiments, for the sake of simplification, we choose the four-subclass classification for T-patches.

In Table II, based on the illustration in Fig. 7, we further define the indices of the four subclasses for each surface type according to the four status combinations of the present parking space and the most influential adjacent parking space. In this table, “o” means “occupied,” “v” means “vacant,” and “X” means “do not care.” In total, there are 16 kinds of image patches related to the four different types of planar surfaces and the four different combinations of parking statuses for each surface type. In the following paragraphs, we will use the notation Type-Index, where Type ∈ {T, G, S, F} and Index ∈ {1, 2, 3, 4}, to label these 16 kinds of image patches. The whole set of these 16 patch labels is denoted as L = {T-1, T-2, T-3, T-4, G-1, G-2, G-3, G-4, S-1, S-2, S-3, S-4, F-1, F-2, F-3, F-4}.
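The label set L can be enumerated mechanically. In the sketch below, the assignment of subclass indices 1–4 to particular status combinations is an assumption for illustration; the authoritative mapping is the one given in Table II.

```python
from itertools import product

SURFACE_TYPES = ("T", "G", "S", "F")       # top, ground, side, front/rear

# Status combinations of (current space, most influential neighbor);
# 'o' = occupied, 'v' = vacant. The index order 1..4 is illustrative.
STATUS_COMBOS = tuple(product("ov", repeat=2))

LABELS = [f"{t}-{i}"
          for t in SURFACE_TYPES
          for i, _ in enumerate(STATUS_COMBOS, start=1)]
# 4 surface types x 4 status combinations = 16 patch labels.
```

Enumerating the labels this way makes it explicit that every patch class encodes the joint status of two spaces, which is what lets one patch contribute evidence to two parking spaces at once.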

In Fig. 8, we illustrate the 16 kinds of image patches for a parking space, together with some patch samples. Note that the front surface and the rear surface belong to the same surface type. Similarly, the surfaces on the two sides of the parking space belong to the same surface type. It can be observed in these samples that the image content inside a patch may reveal not only the information of the current parking space but also the information of the adjacent parking space. Moreover, for each surface type, the image contents for different combinations of parking statuses appear to be quite

Fig. 8. Sixteen kinds of patch patterns and their classification labels. Each patch pattern is indicated by the rectangular region.

different. Hence, it would be possible for us to classify a given image patch into one of the four subclasses simply based on its image content. The classification result provides evidence to support not only the status inference of the current parking space but also the inference of the adjacent parking space. Even though the classification result at a single image patch may not always be correct, we can combine the classification results of several image patches around a parking space to achieve more robust inference.

Given a parking lot, we first set up an IP camera on the roof of a building near the parking lot. The camera is geometrically calibrated to obtain the 3-D to 2-D projection model and to construct the 3-D plane-based scene model. After that, we capture a few image sequences of the monitored parking lot and extract plentiful image patches for each type of planar surface. For each image patch, we manually collect its patch label and extract its HOG feature from the normalized patch. Based on the labeled patch type and the HOG feature, we learn the conditional probability function p(o|l), where o denotes the observed feature of an image patch and l∈ L denotes the label of the image patch.
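As a toy illustration of learning p(o|l), the sketch below fits one class-conditional Gaussian per patch label and classifies an observed feature by maximum likelihood. The synthetic 3-D features stand in for LDA-reduced HOG vectors, the two labels are arbitrary examples, and the actual system performs the final inference with the plane-based BHF rather than per-patch maximum likelihood.

```python
import numpy as np

def fit_gaussians(features, labels):
    """Fit a diagonal Gaussian p(o | l) per patch label from training data."""
    models = {}
    for l in set(labels):
        X = features[np.array(labels) == l]
        models[l] = (X.mean(axis=0), X.var(axis=0) + 1e-6)
    return models

def log_likelihood(o, model):
    mu, var = model
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (o - mu) ** 2 / var)

# Synthetic stand-ins for LDA-reduced HOG features of two G-patch subclasses.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 3)),      # training patches labeled "G-1"
               rng.normal(4, 1, (50, 3))])     # training patches labeled "G-2"
y = ["G-1"] * 50 + ["G-2"] * 50
models = fit_gaussians(X, y)

obs = rng.normal(4, 1, 3)                      # an observed patch feature
best = max(models, key=lambda l: log_likelihood(obs, models[l]))
```

In the full system, these per-label likelihoods are not thresholded patch by patch; they serve as the observation terms that the BHF combines across all patches of a parking space.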

Since the surface type of an image patch can always be determined based on the 3-D scene model and the 3-D to 2-D transformation, we simply construct the conditional probability model for each of the four surface types. Before classification, we apply the multi-class LDA over the training image patches of each surface type to reduce the high-dimensional HOG features down to a much lower dimension. Taking the learning process of the surface type T as an example, each image


References

tems,” in Proc. IEEE Int. Conf. Intell. Transp. Syst., vol. 13, no. 3, Dec. 1998, pp. 24–31.

[4] H. Schneiderman and T. Kanade, “Object detection using the statistics of parts,” Int. J. Comput. Vision, vol. 56, no. 3, pp. 151–177, Feb. 2004.

[5] P. Felzenszwalb, R. Girshick, D. McAllester, and D. Ramanan, “Object detection with discriminatively trained part based models,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 9, pp. 1627–1645, Sep. 2010.

[6] S. Funck, N. Mohler, and W. Oertel, “Determining car-park occupancy from single images,” in Proc. IEEE Intell. Veh. Symp., Jun. 2004, pp. 325–328.

[7] T. Horprasert, D. Harwood, and L. S. Davis, “A statistical approach for real-time robust background subtraction and shadow detection,” in Proc. IEEE Int. Conf. Comput. Vision, Sep. 1999, pp. 1–19.

[8] C. Stauffer and W. E. L. Grimson, “Adaptive background mixture models for real-time tracking,” in Proc. IEEE Int. Conf. Comput. Vision Pattern Recognit., vol. 2, Jun. 1999, pp. 246–252.

[9] P. Power and J. Schoonees, “Understanding background mixture models for foreground segmentation,” in Proc. Image Vision Comput. New Zealand, Nov. 2002, pp. 267–271.

[10] K. Yamada and M. Mizuno, “A vehicle parking detection method using image segmentation,” Electron. Commun., vol. 84, no. 10, pp. 25–34, Oct. 2001.

[11] C. H. Lee, M. G. Wen, C. C. Han, and D. C. Kuo, “An automatic monitoring approach for unsupervised parking lots in outdoor,” in Proc. IEEE Int. Conf. Security Technol., Spain, Oct. 2005, pp. 271–274.

[12] T. Fabian, “An algorithm for parking lot occupation detection,” in Proc. IEEE Comput. Inf. Syst. Ind. Manage. Appl., Jun. 2008, pp. 165–170.

[13] R. J. López-Sastre, P. Gil Jiménez, F. J. Acevedo, and S. Maldonado Bascón, “Computer algebra algorithms applied to computer vision in a parking management system,” in Proc. IEEE Int. Symp. Ind. Electron., Jun. 2007, pp. 1675–1680.

[14] N. Dan, “Parking management system and method,” U.S. Patent Pub. No. 20030164890A1, Jul. 2003.

[15] Q. Wu, C. C. Huang, S. Y. Wang, W. C. Chiu, and T. H. Chen, “Robust parking space detection considering inter-space correlation,” in Proc. IEEE Int. Conf. Multimedia Expo, Jul. 2007, pp. 659–662.

[16] K. Blumer, H. Halaseh, M. Ahsan, H. Dong, and N. Mavridis, “Cost-effective single-camera multi-car parking monitoring and vacancy detection toward real-world parking statistics and real-time reporting,” in Proc. Int. Conf. Neural Inf. Process., Nov. 2012, pp. 506–515.

[17] C. C. Huang, S. J. Wang, Y. J. Chang, and T. Chen, “A Bayesian hierarchical detection framework for parking space detection,” in Proc. IEEE Int. Conf. Acoust. Speech Signal Process., Apr. 2008, pp. 2097–2100.

[18] C. C. Huang and S. J. Wang, “A hierarchical Bayesian generation framework for vacant parking space detection,” IEEE Trans. Circuits Syst. Video Technol., vol. 20, no. 12, pp. 1770–1785, Dec. 2010.

[19] N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in Proc. IEEE Conf. Comput. Vision Pattern Recognit., vol. 1, Jun. 2005, pp. 886–893.

[20] A. M. Martínez and A. C. Kak, “PCA versus LDA,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 23, no. 2, pp. 228–233, Feb. 2001.

[21] J. Wu, Z. Wang, and Z. Fang, “Application of retinex in color restoration of image enhancement to night image,” in Proc. Int. Congr. Image Signal Process., Oct. 2009, pp. 1–4.

[22] A. Yamasaki, H. Takauji, S. Kaneko, T. Kanade, and H. Ohki, “Denighting: Enhancement of nighttime images for a surveillance camera,” in Proc. Int. Conf. Pattern Recognit., Dec. 2008, pp. 1–4.

[23] H. Ibrahim and N. S. P. Kong, “Brightness preserving dynamic histogram equalization for image contrast enhancement,” IEEE Trans. Consum. Electron., vol. 53, no. 4, pp. 1752–1758, Nov. 2007.

Trans. Image Process., vol. 18, no. 9, pp. 1921–1935, Sep. 2009.

[27] Z. Y. Chen, B. R. Abidi, D. L. Page, and M. A. Abidi, “Gray level grouping (GLG): An automatic method for optimized image contrast enhancement—Part 1: The basic method,” IEEE Trans. Image Process., vol. 15, no. 8, pp. 2290–2302, Aug. 2006.

[28] Z. Y. Chen, B. R. Abidi, D. L. Page, and M. A. Abidi, “Gray level grouping (GLG): An automatic method for optimized image contrast enhancement—Part 2: The variations,” IEEE Trans. Image Process., vol. 15, no. 8, pp. 2303–2314, Aug. 2006.

[29] J. Mukherjee and S. K. Mitra, “Enhancement of color images by scaling the DCT coefficients,” IEEE Trans. Image Process., vol. 17, no. 10, pp. 1783–1794, Oct. 2008.

[30] Q. Shan, J. Jia, and M. S. Brown, “Globally optimized linear windowed tone-mapping,” IEEE Trans. Vis. Comput. Graphics, vol. 16, no. 4, pp. 663–675, Jul.–Aug. 2010.

[31] T.-C. Jen and S.-J. Wang, “An efficient Bayesian framework for image enhancement with spatial consideration,” in Proc. IEEE Int. Conf. Image Process., Sep. 2010, pp. 3285–3288.

[32] T. Mertens, J. Kautz, and F. V. Reeth, “Exposure fusion,” in Proc. 15th Pacific Conf. Comput. Graphics Appl., 2007, pp. 382–390.

[33] C.-C. Huang (2012). Huang’s Projects [Online]. Available: http://140.113.238.220/~chingchun/Lotprojects.html

Ching-Chun Huang (M’09) received the B.S., M.S., and Ph.D. degrees in electrical engineering from National Chiao Tung University, Hsinchu, Taiwan, in 2000, 2002, and 2010, respectively.

He is currently an Assistant Professor with the Department of Electrical Engineering, National Kaohsiung University of Applied Sciences, Taiwan. His current research interests include image/video processing, computer vision, and computational photography.

Yu-Shu Tai received the B.S. degree in engineering science from National Cheng Kung University, Tainan, Taiwan, in 2009, and the M.S. degree in electronics engineering from National Chiao-Tung University, Hsinchu, Taiwan, in 2011.

His expertise is in image processing and computer vision.

Sheng-Jyh Wang (M’95) received the B.S. degree in electronics engineering from National Chiao-Tung University (NCTU), Hsinchu, Taiwan, in 1984, and the M.S. and Ph.D. degrees in electrical engineering from Stanford University, Stanford, CA, USA, in 1990 and 1995, respectively.

He is currently a Professor with the Department of Electronics Engineering, NCTU. His current research interests include image processing, video processing, and image analysis.

