


4.4.1 Vehicle Detection Results

The proposed vehicle detection approach is applied to scenes with regular illumination, with strong sunshine, and with text markings on the road, as shown in Figs. 4-5, 4-6, and 4-7. Figure 4-5 reveals a successful detection of the closest preceding vehicle in the lane of the autonomous vehicle. Even though the bottom of the vehicle contour is not a straight horizontal line, the shadow below the vehicle still forms a horizontal line in the image, so the vehicle and its shadow together compose a quasi-rectangular contour.

Fig. 4-5. Vehicle detection with regular illumination.

Figure 4-6 shows a road with patterns such as lane markings, text markings, and a crossing line. These patterns do not affect our vehicle detection because they cannot form vertical edges in the images.

Fig. 4-6. Vehicle detection with patterns on the road.

Fig. 4-7. Vehicle detection under sunny conditions.

Fig. 4-8. Results of vehicle detection with vehicles cutting into the lane of the autonomous vehicle.

Figure 4-7 displays the experimental results under sunny conditions. Although light reflections introduced noise, the proposed algorithm still recognized the target vehicle efficiently in this adverse condition.

Figure 4-8 exhibits detection results for consecutive images. At the bottom of each image, a number shows the distance between the camera and the preceding vehicle, computed by (4.1). In frame 518, the range to the closest preceding car was 41.5 meters. In frame 581, the car in the right lane cut in, so the detected distance changed to 14.7 meters. Likewise, the detected range to the preceding vehicle was 32 meters in frame 1182 and became 25.2 meters in frame 1233 when a car cut in.
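For readers who wish to reproduce this kind of monocular range estimate, the following is a minimal sketch assuming a generic pinhole ground-plane model; the thesis's (4.1) comes from its own coordinate transformation model, so the formula and the parameter values here (camera height, tilt, focal length in pixels) are illustrative assumptions rather than the actual implementation.

```python
import math

def range_to_vehicle(v, cam_height=1.2, tilt_deg=2.0, focal_px=800.0):
    """Estimate the range (m) to a ground point imaged at row v.

    Assumes a pinhole camera mounted at cam_height meters, tilted down
    by tilt_deg, with v measured in pixels below the image center
    (larger v looks closer to the vehicle). All parameter values are
    illustrative, not the thesis's calibration.
    """
    tilt = math.radians(tilt_deg)
    # Angle of the viewing ray below the horizontal.
    phi = tilt + math.atan2(v, focal_px)
    return cam_height / math.tan(phi)

# A row nearer the image bottom maps to a shorter range, e.g.:
print(round(range_to_vehicle(20.0), 1))  # about 20 m with these values
```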

Figure 4-9 shows the results of lane and vehicle detection on the freeway. The distance between the vehicle and the camera is estimated to be 35.5 m.

Fig. 4-9. Results of lane and vehicle detection.

4.4.2 Comparative Analysis

The experimental results were compared with those of other lane and vehicle detection systems in Table 4-1 [29][66]. As can be observed, GOLD [29] adopted a stereo camera, so its hardware cost is higher than that of the other two systems. Regarding computational cost, GOLD could effectively detect lane markings by inverse perspective mapping (IPM) and black-white-black transitions on flat roads. Although IPM may require plenty of computation time, the use of a pre-computed table helps create top-view images rapidly. However, under various road conditions, camera vibration may cause extra mapping distortion and errors. In addition, vehicle detection in GOLD requires comparing the disparity between the two cameras, which takes more time. Sun et al. [66] applied the Gabor filter and support vector machines (SVM) to detect vehicles; these approaches are time-consuming because of their high computational complexity. The approach proposed in this study conducts lane detection first and then defines the current lane region as the ROI for vehicle detection, achieving real-time lane and vehicle detection while reducing errors.

Moreover, in our vehicle detection, a Kalman filter is designed to smooth the estimated range between the preceding vehicle and the camera and to enhance the robustness of the range estimation.
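To illustrate this smoothing step, here is a minimal one-dimensional Kalman filter applied to per-frame range estimates; the near-constant-range state model and the noise variances q and r are assumptions made for the sketch, not the design used in the thesis.

```python
def kalman_smooth_range(measurements, q=0.05, r=4.0):
    """Smooth noisy per-frame range estimates with a 1-D Kalman filter.

    measurements: list of per-frame ranges (m) from the camera model.
    q, r: assumed process and measurement noise variances; the thesis
    does not report its tuning, so these values are illustrative.
    """
    x, p = measurements[0], 1.0          # initial state and covariance
    smoothed = [x]
    for z in measurements[1:]:
        p += q                           # predict: range ~ constant
        k = p / (p + r)                  # Kalman gain
        x += k * (z - x)                 # correct with measurement z
        p *= (1.0 - k)
        smoothed.append(x)
    return smoothed

# Example: a cut-in event appears as a step change in range that the
# filter converges to over several frames.
print(kalman_smooth_range([41.5, 41.2, 41.7, 14.9, 14.7, 14.8]))
```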

Table 4-1. Comparison of approaches in lane and vehicle detection.

Chapter 5

Conclusion and Future Works

5.1 Range Estimation and Dynamic Calibration

In this study, we have presented several approaches for estimating the range between the preceding vehicle and the camera, the range errors, the actual heights of vehicles, and the projective heights of detected vehicles at various positions. The results of the error estimation can be adopted as a reference for determining the preset camera parameters, suppressing estimation errors, and facilitating rapid and accurate estimation of vehicle sizes.

According to the error analyses, variations of the camera tilt and swing angles lead to significant errors in range estimation. A dynamic calibration approach has been proposed to effectively reduce these errors. A Kalman filter is also integrated to estimate swing angles more stably, so that the estimation results are sufficiently robust and estimation errors are further reduced. Experimental results demonstrate that our approaches provide accurate and robust estimates of the range and size of target vehicles.

The proposed approaches can serve as a reference for designers of vision-based driving assistance systems to improve the efficiency of vehicle detection and range estimation.

5.2 Lane Detection

To apply lane detection to the guidance of autonomous vehicles and driving assistance systems, a variety of road conditions should be considered, such as changes in illumination, a great diversity of road curvature, and differences in the configuration of lane markings, which may be continuous, dashed, or occluded. A lane detection system should have high efficiency, robustness, and reliability to make driving at high speed safe.

This study has proposed a rapid computation of lane width to predict the projective positions and widths of lane markings, and an LME FSM approach is designed to extract lane markings efficiently. A statistical search algorithm is also proposed to correctly and adaptively determine thresholds under various illumination conditions. Moreover, a dynamic calibration algorithm is applied to update the camera parameters and the lane width. Additionally, a fuzzy reasoning method is adopted to determine whether a lane marking is continuous, dashed, or occluded. Finally, an ROI strategy is proposed to narrow the search region and make the detection more robust. Experimental results show that even when obstacles occlude parts of the lane markings, or the lane markings have complicated curvature, road boundaries can still be reconstructed correctly by a B-spline with four segments. In conclusion, even with lane information, there are still many threats from surrounding vehicles and obstacles while driving. Thus, obstacle detection should be combined with lane detection systems to improve the guidance of autonomous vehicles and driving assistance systems in the future.
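To make the reconstruction step concrete, the sketch below fits a B-spline through sampled boundary points with SciPy; the thesis's four-segment construction uses its own control points, so this generic smoothing fit, and the sample coordinates, are illustrative assumptions only.

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Sampled (u, v) image points along one lane boundary; the values are
# fabricated for illustration, standing in for points recovered from
# the detected lane-marking segments.
u_pts = np.array([320.0, 308.0, 292.0, 272.0, 248.0, 220.0])
v_pts = np.array([400.0, 350.0, 300.0, 250.0, 200.0, 150.0])

# Cubic B-spline fit; the smoothing factor s bridges occluded gaps.
tck, _ = splprep([u_pts, v_pts], k=3, s=2.0)
u_fit, v_fit = splev(np.linspace(0.0, 1.0, 50), tck)  # dense resampling
```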

5.3 Vehicle Detection

A real-time obstacle detection system is presented that detects obstacles and recognizes vehicles whose shapes are similar to rectangles. Detection starts with edge extraction; obstacles are then identified and classified as vehicles according to their contour sizes in the vertical and horizontal edge maps. Many obstacles can be found in this way, and their distances to the camera can be acquired. With the information from lane detection, the closest preceding car in the lane of the autonomous vehicle can be detected successfully in real time.

This work can be applied as a vehicle detector in driving assistance systems.
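As a loose illustration of edge-and-contour-based candidate detection, the OpenCV sketch below keeps quasi-rectangular candidates by bounding-box size inside a lane ROI; the Canny edge operator, thresholds, and aspect-ratio limits are stand-in assumptions, not the fuzzy contour-size-similarity rules of the thesis.

```python
import cv2

def find_vehicle_candidates(gray_roi, min_w=20, min_h=15):
    """Find quasi-rectangular obstacle candidates in a lane ROI.

    gray_roi: grayscale image cropped to the current-lane ROI.
    Candidates are kept when their bounding boxes have plausible
    width, height, and aspect ratio; all thresholds here are
    illustrative, not the thesis's values.
    """
    edges = cv2.Canny(gray_roi, 80, 160)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w >= min_w and h >= min_h and 0.5 <= w / h <= 4.0:
            candidates.append((x, y, w, h))
    return candidates
```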

5.4 Future Works

Some directions for future study are recommended below:

(1) Simultaneous detection of several lanes and vehicles will be conducted, and the obtained information will be applied to the throttle and brake systems of vehicles to support the automatic driving of intelligent autonomous vehicles.

(2) The changeable illumination and weather conditions of outdoor surroundings and the high speed of vehicle movement increase the difficulty of lane and vehicle detection. More robust and rapid approaches should be proposed to make driving assistance systems real-time and adaptive.

(3) Besides vehicles, pedestrians, motorcycles, and other obstacles can also affect driving safety. Therefore, techniques for detecting those objects should be developed to increase the feasibility of driving assistance systems and improve driving safety.

APPENDIX A

Relation of Projected Width and v-coordinate

If the lane width is WL and the projective lane width at row v is wL(v), then (A-1) can be obtained from (1) and (2). (A-2) is the first derivative of wL(v) with respect to v. Let ξ = (π/2 − α) and τ = tan⁻¹(v/λ). Then (A-4) and (A-5) can be derived from (A-2) and (A-3). Since the camera is placed in a vehicle to detect the lane, when α is large the farther part of the lane does not appear in the image, so α is usually between 0 and 6 degrees. In this study, we let the tilt angle α < 10°, so the value of ξ is larger than 80°; substituting these bounds into (A-6) and (A-7) and applying them to (A-5) yields (A-8) and (A-9). (A-9) shows that the first derivative of wL(v) is a constant, so the relation between wL(v) and v can be expressed by a linear equation as in (A-10).
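Equations (A-1) through (A-10) did not survive extraction. As a hedged reconstruction of the key step, the derivation below assumes a generic pinhole ground-plane model with camera height H, focal length λ, tilt α, lane width WL, and v measured downward from the image center; the thesis's own derivation through ξ and τ may differ in convention, but it reaches the same linear conclusion.

```latex
% Sketch under an assumed pinhole ground-plane model; not the
% thesis's (A-1)--(A-10), whose conventions may differ.
\begin{align*}
  w_L(v) &= \frac{\lambda W_L}{Z\cos\alpha + H\sin\alpha}
          = \frac{W_L}{H}\bigl(v\cos\alpha + \lambda\sin\alpha\bigr),\\
  \frac{\mathrm{d}w_L(v)}{\mathrm{d}v} &= \frac{W_L}{H}\cos\alpha
          \;=\; \text{const}
          \;\approx\; \frac{W_L}{H}
          \quad (\alpha < 10^\circ \Rightarrow \cos\alpha > 0.98),\\
  w_L(v) &\approx \frac{W_L}{H}\,v + \frac{\lambda W_L}{H}\sin\alpha .
\end{align*}
```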


APPENDIX B

Adaptation to Illumination Conditions

The proposed statistical search algorithm determines GgH, GgL, GmH, and GmL in the region of interest (ROI) for detecting lane markings. The procedure for determining the thresholds on each row is as follows:

Step 1) Setting the search window: Set a window on each N-th row to search for GgH, GgL, GmH, and GmL. The width of the search window on the N-th row, Ww(N), is given by (B-1). The left border of the search window is also the left border of the search region of the lane marking on the N-th row.

$$W_w(N)=\begin{cases}5\times w_m(N), & \text{if } S_R \ge 5\times w_m(N)\\ S_R, & \text{otherwise}\end{cases} \tag{B-1}$$

where wm(N) is the estimated width of the lane marking on the N-th row and SR denotes the search region.

Fig. B-1. The gray-level distribution when a row of the lane marking lies in the search window.

Step 2) Finding the zones of the lane marking and the ground in the window: Since the gray levels of lane markings are obviously higher than those of the ground, the distribution of gray levels in a search window can be divided into three main zones if a row of lane markings appears close to the center of the search window. The three zones in sequence are a lowland, a plateau, and again a lowland of gray levels. These zones can be determined from the representative bright and dark levels of the lane markings and the ground, which are respectively the average gray levels of the lane markings and of the ground. Let G denote the gray levels along the M-coordinate in the search window, as shown in (B-2). Compute the pixel numbers of the lane marking and the ground in the window, Am and Ag respectively, by (B-3). Let a set L be the gray levels of the pixels in G ordered from large to small as in (B-4), where L1 and LAw respectively represent the highest and lowest gray levels in G. Lm is the average gray level of the lane markings, i.e., the average of the brightest Am pixels in L. Lg is the average gray level of the ground, i.e., the average of the darkest Ag pixels in L, as shown in (B-5). After finding these representative bright and dark levels, the three zones of interest can be found based on the following definitions. In the search window, the left and right borders of the lane marking, MmL and MmR, are respectively defined as the leftmost and rightmost pixels whose gray levels are larger than Lm. The left border of the ground, MgL, is defined as the pixel whose gray level is lower than Lg and that is closest to MmL. The right border of the ground, MgR, is defined as the pixel whose gray level is lower than Lg and that is closest to MmR. Figure B-1 shows the gray level of each pixel in G when a row of the lane marking lies in the search window. As can be seen, the plateau zone [MmL, MmR] of the lane marking in G can be found by Lm, and the lowland zones of the ground, [MwL, MgL] ∪ [MgR, MwR], by Lg.

$$G=\{\,G_M \mid M\in[M_{wL},\,M_{wR}]\,\} \tag{B-2}$$

where GM denotes the gray level in M-coordinate. MwL and MwR respectively represent the left and right boundaries of the search window.


the same as those in the previous row, and go to step 6. Otherwise, shift the window rightward by the distance wm(N) and return to step 2.

Step 6) Terminate the determination process of the N-th row, and export the results of GmH, GmL, GgH, and GgL.
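As a concrete illustration of Steps 1 and 2, the Python sketch below computes the search-window width of (B-1) and the representative levels Lm and Lg; because (B-3) and Steps 3 through 5 did not survive extraction, the split of the window into Am marking pixels and Ag ground pixels is an assumption, and the final thresholds GmH, GmL, GgH, and GgL are not derived here.

```python
def window_width(w_m, s_r):
    """Search-window width W_w(N) per (B-1)."""
    return 5 * w_m if s_r >= 5 * w_m else s_r

def marking_ground_levels(gray_window, a_m):
    """Representative bright/dark levels in one search window (Step 2).

    gray_window: gray levels G of the pixels in the window.
    a_m: estimated pixel count of the lane marking, Am. The ground
    count Ag is taken here as the remainder of the window, an
    assumption standing in for the lost (B-3).
    Returns (Lm, Lg): the mean of the brightest Am pixels and the
    mean of the darkest Ag pixels, following (B-4) and (B-5).
    """
    a_g = max(len(gray_window) - a_m, 1)         # assumed Ag
    ordered = sorted(gray_window, reverse=True)  # L1 >= ... >= L_Aw
    lm = sum(ordered[:a_m]) / a_m                # marking level Lm
    lg = sum(ordered[-a_g:]) / a_g               # ground level Lg
    return lm, lg

# Example: a bright marking (~200) on darker ground (~80).
print(marking_ground_levels([82, 79, 85, 198, 205, 201, 84, 80], a_m=3))
```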

References

[1] Z. Sun, G. Bebis, and R. Miller, "On-Road Vehicle Detection: A Review," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 28, no. 5, pp. 694-711, May 2006.

[2] W. Jones, "Keeping Cars from Crashing," IEEE Spectrum, vol. 38, no. 9, pp. 40-45, 2001.

[3] W. Jones, "Building Safer Cars," IEEE Spectrum, vol. 39, no. 1, pp. 82-85, 2002.

[4] Bing-Fei Wu, Chuan-Tsai Lin, and Yen-Lin Chen, "Dynamic Calibration and Occlusion Handling Algorithms for Lane Tracking," IEEE Trans. Industrial Electronics, vol. 56, no. 5, pp. 1757-1773, May 2009.

[5] Yen-Lin Chen, Bing-Fei Wu, Chuan-Tsai Lin, Chung-Jui Fan, and Chih-Ming Hsieh, "Real-time Vision-based Vehicle Detection and Tracking on a Moving Vehicle for Nighttime Driver Assistance," accepted for publication in International Journal of Robotics and Automation.

[6] Bing-Fei Wu, Chuan-Tsai Lin, and Chao-Jung Chen, "Real-time Lane and Vehicle Detection Based on a Single Camera Model," accepted for publication in International Journal of Computers and Applications.

[7] Bing-Fei Wu and Chuan-Tsai Lin, "Real-Time Fuzzy Vehicle Detection Based on Contour Size Similarity," Int. J. Fuzzy Systems, vol. 7, no. 2, pp. 54-62, June 2005.

[8] Bing-Fei Wu, Chuan-Tsai Lin, and Yen-Lin Chen, "Range and Size Estimation Based on a Coordinate Transformation Model for Driving Assistance Systems," accepted for publication in IEICE Transactions on Information and Systems.

[9] Y. Chen, "Highway Overhead Structure Detection Using Video Image Sequences," IEEE Trans. Intell. Transp. Syst., vol. 4, no. 2, pp. 67-77, 2003.

[10] E. Segawa, M. Shiohara, S. Sasaki, N. Hashiguchi, T. Takashima, and M. Tohno, "Preceding Vehicle Detection Using Stereo Images and Non-scanning Millimeter-Wave Radar," IEICE Trans. Inf. & Syst., vol. E89-D, pp. 2101-2108, July 2006.

[11] N. Hautiere, R. Labayrade, and D. Aubert, "Estimation of the Visibility Distance by Stereovision: A Generic Approach," IEICE Trans. Inf. & Syst., vol. E89-D, pp. 2084-2091, July 2006.

[12] S. Nedevschi, R. Danescu, D. Frentiu, T. Marita, F. Oniga, C. Pocol, R. Schmidt, and T. Graf, "High accuracy stereo vision system for far distance obstacle detection," in Proc. IEEE Intelligent Vehicles Symp., pp. 292-297, June 2004.

[13] L. L. Wang and W. H. Tsai, "Camera Calibration by Vanishing Lines for 3-D Computer Vision," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 13, no. 4, pp. 370-376, 1991.

[14] T. N. Schoepflin and D. J. Dailey, "Dynamic Camera Calibration of Roadside Traffic Management Cameras for Vehicle Speed Estimation," IEEE Trans. Intell. Transp. Syst., vol. 4, no. 2, pp. 90-98, 2003.

[15] Y. M. Liang, H. R. Tyan, S. L. Chang, H. Y. M. Liao, and S. W. Chen, "Video Stabilization for a Camcorder Mounted on a Moving Vehicle," IEEE Trans. Vehicular Technology, vol. 53, no. 6, pp. 1636-1648, 2004.

[16] Y. Hwang, J. Seo, and H. Hong, "Key-Frame Selection and an LMedS-Based Approach to Structure and Motion Recovery," IEICE Trans. Inf. & Syst., vol. E91-D, pp. 114-123, Jan. 2008.

[17] J. Wang, F. Shi, J. Zhang, and Y. Liu, "A new calibration model of camera lens distortion," Pattern Recognition, vol. 41, no. 2, pp. 607-615, Feb. 2008.

[18] B. W. He and Y. F. Li, "Camera calibration from vanishing points in a vision system," Optics & Laser Technology, vol. 40, no. 3, pp. 555-561, April 2008.

[19] K. T. Song and J. C. Tai, "Dynamic Calibration of Pan-Tilt-Zoom Cameras for Traffic Monitoring," IEEE Trans. Systems, Man and Cybernetics, Part B, vol. 36, pp. 1091-1103, Oct. 2006.

[20] A. Yilmaz, X. Li, and M. Shah, "Contour-Based Object Tracking with Occlusion Handling in Video Acquired Using Mobile Cameras," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 26, no. 11, pp. 1531-1536, 2004.

[21] S. F. Lin, J. Y. Chen, and H. X. Chao, "Estimation of Number of People in Crowded Scenes Using Perspective Transformation," IEEE Trans. Syst., Man, Cybern. A, vol. 31, pp. 645-654, 2001.

[22] C. C. C. Pang, W. W. L. Lam, and N. H. C. Yung, "A Novel Method for Resolving Vehicle Occlusion in a Monocular Traffic-Image Sequence," IEEE Trans. Intell. Transp. Syst., vol. 5, no. 3, pp. 129-141, 2004.

[23] A. Broggi, M. Bertozzi, C. Guarino Lo Bianco, and A. Piazzi, "Visual perception of obstacles and vehicles for platooning," IEEE Trans. Intell. Transp. Syst., vol. 1, no. 3, pp. 164-176, 2000.

[24] Y. L. Chen, Y. H. Chen, C. J. Chen, and B. F. Wu, "Nighttime Vehicle Detection for Driver Assistance and Autonomous Vehicles," in Proc. IAPR International Conference on Pattern Recognition, vol. 1, pp. 687-690, 2006.

[25] A. Hidaka, K. Nishida, and T. Kurita, "Object Tracking by Maximizing Classification Score of Detector Based on Rectangle Features," IEICE Trans. Inf. & Syst., vol. E91-D, no. 8, pp. 2163-2170, Aug. 2008.

[26] E. D. Dickmanns and B. D. Mysliwetz, "Recursive 3-D road and relative ego-state recognition," IEEE Trans. Pattern Anal. Mach. Intell., vol. 14, no. 2, pp. 199-213, Feb. 1992.

[27] E. D. Dickmanns and V. Graefe, "Dynamic monocular machine vision," Machine Vision and Applications, vol. 1, pp. 223-240, 1988.

[28] E. D. Dickmanns and V. Graefe, "Applications of dynamic monocular machine vision," Machine Vision and Applications, vol. 1, pp. 241-261, 1988.

[29] M. Bertozzi and A. Broggi, "GOLD: A parallel real-time stereo vision system for generic obstacle and lane detection," IEEE Trans. Image Processing, vol. 7, pp. 62-81, Jan. 1998.

[30] M. Bertozzi and A. Broggi, "Vision-Based Vehicle Guidance," Computer, vol. 30, pp. 49-55, July 1997.

[31] C. Kreucher and S. Lakshmanan, "LANA: A Lane Extraction Algorithm that Uses Frequency Domain Features," IEEE Trans. Robotics and Automation, vol. 15, April 1999.

[32] V. Kastrinaki, M. Zervakis, and K. Kalaizakis, "A survey of video processing techniques for traffic applications," Image Vis. Comput., vol. 21, no. 4, pp. 359-381, Apr. 2003.

[33] J. C. McCall and M. M. Trivedi, "Video-based lane estimation and tracking for driver assistance: survey, system, and evaluation," IEEE Trans. Intell. Transp. Syst., vol. 7, no. 1, pp. 20-37, March 2006.

[34] B. F. Wu and C. T. Lin, "Robust Image Measurement and Analysis Based on Perspective Transformations," in Proc. IEEE Syst., Man, Cybern. Symp., pp. 2390-2395, Oct. 2006.

[35] S. Nedevschi, C. Vancea, T. Marita, and T. Graf, "Online Extrinsic Parameters Calibration for Stereovision Systems Used in Far-Range Detection Vehicle Applications," IEEE Trans. Intell. Transp. Syst., vol. 8, pp. 651-660, Dec. 2007.

[36] Y. Motai and A. Kosaka, "Hand-Eye Calibration Applied to Viewpoint Selection for Robotic Vision," IEEE Trans. Ind. Electron., vol. 54, no. 2, pp. 3731-3741, Oct. 2008.

[37] Bing-Fei Wu and Chuan-Tsai Lin, "Robust Lane Detection and Tracking for Driving Assistance Systems," in Proc. IEEE Syst., Man, Cybern. Symp., pp. 3848-3853, Oct. 2007.

[38] M. Chen, T. Jochem, and D. Pomerleau, "AURORA: A Vision-Based Roadway Departure Warning System," in Proc. IEEE Intelligent Robots and Systems, pp. 243-248, 1995.

[39] K. Kluge and S. Lakshmanan, "A deformable-template approach to lane detection," in Proc. IEEE Intelligent Vehicles Symp., pp. 54-59, 1995.

[40] A. Takahashi, Y. Ninomiya, M. Ohta, and K. Tange, "A Robust Lane Detection using Real-time Voting Processor," in Proc. IEEE Intelligent Transportation Systems, pp. 577-580, Oct. 1999.

[41] J. Goldbeck and B. Huertgen, "Lane Detection and Tracking by Video Sensors," in Proc. IEEE/IEEJ/JSAI International Conference on Intelligent Transportation Systems, pp. 74-79, Oct. 1999.

[42] P. Jeong and S. Nedevschi, "Local Difference Probability (LDP)-Based Environment Adaptive Algorithm for Unmanned Ground Vehicle," IEEE Trans. Intell. Transp. Syst., vol. 7, no. 3, pp. 282-292, Sep. 2006.

[43] P. Jeong and S. Nedevschi, "Efficient and robust classification method using combined feature vector for lane detection," IEEE Trans. Circuits and Systems for Video Technology, vol. 15, pp. 528-537, April 2005.

[44] Y. He, H. Wang, and B. Zhang, "Color-Based Road Detection in Urban Traffic Scenes," IEEE Trans. Intell. Transp. Syst., vol. 5, no. 4, pp. 309-318, Dec. 2004.

[45] B. Fardi and G. Wanielik, "Hough transformation based approach for road border detection in infrared images," in Proc. IEEE Intelligent Vehicles Symp., Parma, Italy, pp. 549-554, June 2004.

[46] C. R. Jung and C. R. Kelber, "Lane following and lane departure using a linear-parabolic model," Image and Vision Computing, vol. 23, pp. 1192-1202, Nov. 2005.

[47] A. Broggi, M. Cellario, P. Lombardi, and M. Porta, "An evolutionary approach to visual sensing for vehicle navigation," IEEE Trans. Ind. Electron., vol. 50, no. 1, pp. 18-29, Feb. 2003.

[48] R. Arnay, L. Acosta, M. Sigut, and J. Toledo, "Ant colony optimisation algorithm for detection and tracking of non-structured roads," Electronics Letters, vol. 44, pp. 725-727, June 2008.

[49] C. F. Juang, C. M. Lu, C. Lo, and C. Y. Wang, "Ant Colony Optimization Algorithm for Fuzzy Controller Design and Its FPGA Implementation," IEEE Trans. Ind. Electron., vol. 55, no. 3, pp. 1453-1462, Mar. 2008.

[50] K. Sundareswaran, K. Jayant, and T. N. Shanavas, "Inverter Harmonic Elimination Through a Colony of Continuously Exploring Ants," IEEE Trans. Ind. Electron., vol. 54, no. 5, pp. 2558-2565, Oct. 2007.

[51] C. D'Cruz and J. J. Zou, "Lane detection for driver assistance and intelligent vehicle applications," in Proc. International Symposium on Communications and Information Technologies, pp. 1291-1296, Oct. 2007.

[52] T. Liu, N. Zheng, H. Cheng, and Z. Xing, "A novel approach of road recognition based on deformable template and genetic algorithm," in Proc. IEEE Intell. Transp. Syst., pp. 1251-1256, Oct. 2003.

[53] S. Sehestedt, S. Kodagoda, A. Alempijevic, and G. Dissanayake, "Robust lane detection in urban environments," in Proc. IEEE Intelligent Robots and Systems, pp. 123-128, Oct. 2007.

[54] C. Caraffi, S. Cattani, and P. Grisleri, "Off-Road Path and Obstacle Detection Using Decision Networks and Stereo Vision," IEEE Trans. Intell. Transp. Syst., vol. 8, pp. 607-618, Dec. 2007.

[55] Q. Li, N. Zheng, and H. Cheng, "Springrobot: A Prototype Autonomous Vehicle and Its Algorithms for Lane Detection," IEEE Trans. Intell. Transp. Syst., vol. 5, no. 4, pp. 300-308, Dec. 2004.

[56] H. Y. Cheng, B. S. Jeng, P. T. Tseng, and K. C. Fan, "Lane Detection With Moving Vehicles in the Traffic Scenes," IEEE Trans. Intell. Transp. Syst., vol. 7, no. 4, pp. 571-582, Dec. 2006.

[57] Y. Wang, D. Shen, and E. K. Teoh, "Lane detection using spline model," Pattern Recognition Letters, vol. 21, pp. 677-689, July 2000.
