
Conclusions and Future Work

In this thesis, we propose a lane line detection and tracking system that utilizes time-slice images. The system architecture consists of four modules: (1) Pre-processing, (2) Vanishing Point Computation and Row of Interest (ROI) Setting, (3) Lane Line Detection and Verification, and (4) Lane Line Tracking. Pre-processing performs RGB-to-grayscale conversion, image smoothing, image normalization, and edge detection to obtain the edge features of the lane lines. Vanishing Point Computation and ROI Setting consists of Otsu binarization, the Hough transform, vanishing point computation, and ROI setting, which locate the vanishing point and delimit our ROIs. Lane Line Detection and Verification applies time-slice image generation, gradient value adjustment, gradient value smoothing, peak finding, peak connecting, candidate lane line detection, and lane line verification to extract the lane lines in the image. The gradient value adjustment algorithm is proposed to overcome the sparseness problem in detecting dashed lane lines. Finally, Lane Line Tracking applies a prediction procedure to track a lane line across the time-slice images. Since we use the location of a lane line in previous images to constrain the probable lane positions in the current image and process only the ROIs instead of the whole image, the processing time per image is reduced.
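To make the pipeline above concrete, the sketch below expresses the pre-processing and vanishing point/ROI stage in Python with OpenCV. It is a minimal illustration under assumed settings: the kernel size, the Sobel operator standing in for the edge detector, the Hough vote threshold, and the ROI spacing are all assumptions rather than the parameters used in the thesis.

```python
# Minimal sketch of modules (1) and (2): pre-processing and vanishing point /
# ROI setting. Kernel sizes, thresholds, and ROI spacing are assumptions,
# not the parameters used in the thesis.
import cv2
import numpy as np

def preprocess(frame_bgr):
    """Grayscale -> smoothing -> normalization -> edge (gradient) image."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    smooth = cv2.GaussianBlur(gray, (5, 5), 0)                  # assumed 5x5 kernel
    norm = cv2.normalize(smooth, None, 0, 255, cv2.NORM_MINMAX)
    gx = cv2.Sobel(norm, cv2.CV_64F, 1, 0, ksize=3)             # Sobel stands in for
    gy = cv2.Sobel(norm, cv2.CV_64F, 0, 1, ksize=3)             # the edge detector
    return cv2.convertScaleAbs(np.sqrt(gx * gx + gy * gy))

def vanishing_point(edge_img):
    """Otsu binarization + Hough transform; intersect the two strongest lines."""
    _, binary = cv2.threshold(edge_img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    lines = cv2.HoughLines(binary, 1, np.pi / 180, 120)         # assumed vote threshold
    if lines is None or len(lines) < 2:
        return None
    (r1, t1), (r2, t2) = lines[0][0], lines[1][0]
    A = np.array([[np.cos(t1), np.sin(t1)],
                  [np.cos(t2), np.sin(t2)]])
    if abs(np.linalg.det(A)) < 1e-6:                            # near-parallel lines
        return None
    x, y = np.linalg.solve(A, np.array([r1, r2]))               # rho = x cosT + y sinT
    return int(round(x)), int(round(y))

def set_rois(vp_y, img_height, num_rows=8):
    """Choose a few image rows between the vanishing point and the bottom as ROIs."""
    return np.linspace(vp_y + 10, img_height - 1, num_rows).astype(int)
```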

The test images are captured from video clips recorded by a camera mounted at the upper center of the vehicle's windshield, and we focus on the transitional case of driving from straight to curved lane lines as well as the lane-changing case. The experimental results show that our proposed methods can improve lane line recognition. However, two problems still need to be resolved, as described in Section 4.3.

Thus, several issues concerning lane line extraction based on time-slice images are worth further investigation. For future work, we offer the following suggestions:

(1) Once the vanishing point is determined, how many rows in the image should be selected as ROIs? More ROIs facilitate lane line detection and describe the shape of the lane lines in greater detail, but the processing time increases accordingly.

(2) After selecting the new lane line points on each ROI, how should constraints be set to check whether the shape of the lane line is correct? Although tracking the lane line points through the time-slice images is an intuitive approach, the estimated lane line positions easily suffer from noise such as vehicles and shadows in the image. Hence, more features should be taken into consideration to eliminate this noise and extract the lane lines more reliably.

(3) What degree of smoothing of the gradient histogram is most suitable for detecting the feature points of the lane lines? (A sketch illustrating this trade-off is given after this list.)

(4) If a system not only detects the lane line positions but also recognizes the types of lane lines, such as solid or dashed, single or double, and yellow or white, drivers can gain a better understanding of the driving environment and react in advance to avoid possible accidents on the road.

(5) In this thesis, we only discuss the transitional case of driving from straight to curved lane lines and the lane-changing case. Other cases, such as different weather conditions or driving scenarios, should be taken into consideration to construct a more robust lane line detection system.
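As a companion to suggestion (3), the sketch below shows one way to smooth the gradient values of an ROI row and pick the resulting peaks as candidate lane line points. It only illustrates the trade-off: the Gaussian width sigma and the peak thresholds are assumed values, and SciPy's generic peak finder stands in for the peak-finding step of the thesis.

```python
# Hedged sketch of gradient value smoothing and peak finding on one ROI row.
# sigma, the relative height threshold, and the minimum peak distance are
# illustrative assumptions, not the settings used in the thesis.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import find_peaks

def candidate_lane_points(gradient_row, sigma=2.0, min_distance=10):
    """Smooth a 1-D gradient profile and return the column indices of its peaks."""
    smoothed = gaussian_filter1d(gradient_row.astype(float), sigma=sigma)
    height = 0.3 * smoothed.max()                 # assumed relative threshold
    peaks, _ = find_peaks(smoothed, height=height, distance=min_distance)
    return peaks, smoothed

# A larger sigma suppresses spurious peaks caused by noise but can merge the
# two edges of a wide lane marking into one peak; a smaller sigma keeps them
# separate at the cost of admitting more noise peaks.
```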


