
Chapter 5 Conclusion and Future Works

5.4 Future Works

Some directions for future study are recommended below:

(1) Simultaneous detection of multiple lanes and vehicles will be conducted, and the obtained information will be applied to the throttle and brake systems of vehicles to support the automatic driving of intelligent autonomous vehicles.

(2) The changing illumination and weather conditions of outdoor surroundings, together with the high speed of vehicle movement, increase the difficulty of lane and vehicle detection. More robust and faster approaches should be developed to make driving assistance systems real-time and adaptive.

(3) Besides vehicles, pedestrians, motorcycles, and other obstacles can also affect driving safety. Therefore, techniques for detecting these objects should be developed to increase the feasibility of the driving assistance system and improve driving safety.

APPENDIX A

Relation of Projected Width and v-coordinate

If the lane width is WL and the projected lane width on the v-coordinate is wL(v), then (A-1) can be obtained from (1) and (2). (A-2) gives the first derivative of wL(v) with respect to v. Let ξ = (π/2 − α) and τ = tan⁻¹(v/λ). Then (A-4) and (A-5) can be derived from (A-2) and (A-3). Since the camera is mounted in a vehicle to detect the lane, the farther part of the lane would not appear in the image if α were large, so α usually lies between 0° and 6°. In this study the tilt angle is taken as α < 10°, so that ξ > 80°; substituting these bounds yields (A-6) and (A-7), which are then applied to (A-5) to obtain (A-8) and (A-9). (A-9) shows that the first derivative of wL(v) is a constant, so the relation between wL(v) and v can be expressed by a linear equation, as in (A-10).

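Since (A-1)–(A-10) could not be recovered from the source, the following is only a minimal sketch of why wL(v) is linear in v, using a standard flat-road pinhole model with camera height H, tilt angle α, and focal length λ in pixels; this model is an assumption and need not coincide with the thesis's own equations (1) and (2):

```latex
% Minimal sketch (assumed flat-road pinhole model, NOT the thesis's (A-1)-(A-10)).
% A ground point at horizontal distance Z projects to image row v with
%   tan(alpha + tau) = H / Z,  where tau = tan^{-1}(v / lambda).
% Its depth along the optical axis is z_c = H cos(tau) / sin(alpha + tau),
% so a lane of width W_L projects to width
%   w_L(v) = lambda W_L / z_c
%          = (lambda W_L / H) (sin(alpha) + cos(alpha) tan(tau))
%          = (W_L / H) (v cos(alpha) + lambda sin(alpha)).
\[
  w_L(v) = \frac{W_L}{H}\bigl(v\cos\alpha + \lambda\sin\alpha\bigr),
  \qquad
  \frac{\mathrm{d}\,w_L(v)}{\mathrm{d}v} = \frac{W_L\cos\alpha}{H}.
\]
```

In this idealized model the slope is exactly constant; the appendix reaches the same linear form through the small-tilt bounds on ξ.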

APPENDIX B

Adaptation to Illumination Conditions

The proposed statistical search algorithm determines GgH, GgL, GmH, and GmL in the region of interest (ROI) for detecting lane markings. The procedures for determining the thresholds in each row are given as follows:

Step 1) Setting search windows: Set a window on each N-th row to search for GgH, GgL, GmH, and GmL. The width of the search window on the N-th row, Ww(N), is given by (B-1); a short code rendering follows the equation. The left border of the search window is also the left border of the search region of the lane marking on the N-th row.

$$
W_w(N)=
\begin{cases}
5 \times w_m(N), & \text{if } S_R \ge 5 \times w_m(N)\\
S_R, & \text{otherwise}
\end{cases}
\tag{B-1}
$$

where $w_m(N)$ is the estimated width of the lane marking on the N-th row and $S_R$ denotes the search region.
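A one-line rendering of (B-1) in Python may make the rule concrete; it assumes SR is measured in pixels as the width of the search region (names are illustrative):

```python
def search_window_width(w_m: float, S_R: float) -> float:
    """Window width W_w(N) per (B-1): five estimated marking widths
    if the search region S_R is wide enough, otherwise the whole region.
    w_m is the estimated lane-marking width w_m(N) on the N-th row."""
    return 5 * w_m if S_R >= 5 * w_m else S_R
```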

Fig. B-1. The gray-level distribution when a row of the lane marking lies in the search window.

Step 2) Finding the zones of the lane marking and the ground in the window: Since the gray levels of lane markings are clearly higher than those of the ground, the distribution of gray levels in a search window can be divided into three main zones whenever a row of the lane marking appears close to the center of the search window. In sequence, the three zones are a lowland, a plateau, and again a lowland of gray levels. These zones can be determined from the representative bright and dark levels of the lane marking and the ground, namely their respective average gray levels.

Let G denote the gray levels along the M-coordinate in the search window, as defined in (B-2). Compute the pixel counts of the lane marking and the ground in the window, Am and Ag respectively, by (B-3). Let L be the set of gray levels of the pixels in G, ordered from large to small as in (B-4), where L1 and LAw respectively represent the highest and lowest gray levels in G. Lm, the average gray level of the lane marking, is the average of the brightest Am pixels in L; Lg, the average gray level of the ground, is the average of the darkest Ag pixels in L, as shown in (B-5).

After finding these representative bright and dark levels, the three zones of interest are located as follows (a code sketch of this step follows (B-2)). In the search window, the left and right borders of the lane marking, MmL and MmR, are respectively defined as the leftmost and rightmost pixels whose gray levels are larger than Lm. The left border of the ground, MgL, is the pixel whose gray level is lower than Lg that is closest to MmL; the right border of the ground, MgR, is the pixel with gray level lower than Lg that is closest to MmR. Figure B-1 shows the gray level of each pixel in G when a row of the lane marking lies in the search window. As can be seen, the plateau zone [MmL, MmR] of the lane marking in G can be found from Lm, and the lowland zone of the ground, the union of [MwL, MgL] and [MgR, MwR], from Lg.

$$
G=\left\{\,G_M \mid M\in[M_{wL},\,M_{wR}]\,\right\}
\tag{B-2}
$$

where $G_M$ denotes the gray level at position M in the M-coordinate, and $M_{wL}$ and $M_{wR}$ respectively represent the left and right boundaries of the search window.
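Equations (B-3)–(B-5) did not survive extraction, but the prose above fixes the statistics they define. The following Python sketch (NumPy assumed; all names hypothetical, with Am and Ag taken as already computed) illustrates Step 2:

```python
import numpy as np

def marking_and_ground_zones(G: np.ndarray, A_m: int, A_g: int):
    """Sketch of Step 2: locate the lane-marking plateau and the ground
    lowlands in one search-window row.

    G   : gray levels of the window pixels, ordered left to right.
    A_m : pixel count of the lane marking in the window (from lost (B-3)).
    A_g : pixel count of the ground in the window (from lost (B-3)).
    Assumes a row of the marking lies near the window center, so that
    pixels brighter than L_m and darker than L_g exist on both sides.
    """
    # (B-4): gray levels ordered from large to small.
    L = np.sort(G)[::-1]

    # (B-5): representative levels -- average of the A_m brightest pixels
    # (lane marking) and of the A_g darkest pixels (ground).
    L_m = L[:A_m].mean()
    L_g = L[-A_g:].mean()

    # Plateau borders: leftmost/rightmost pixels brighter than L_m.
    bright = np.flatnonzero(G > L_m)
    M_mL, M_mR = int(bright[0]), int(bright[-1])

    # Ground borders: pixels darker than L_g closest to the plateau.
    dark = np.flatnonzero(G < L_g)
    M_gL = int(dark[dark < M_mL][-1])  # nearest dark pixel left of M_mL
    M_gR = int(dark[dark > M_mR][0])   # nearest dark pixel right of M_mR

    return L_m, L_g, (M_mL, M_mR), (M_gL, M_gR)
```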

[Equations (B-3)–(B-5) and Steps 3–5 are not recoverable from the source; only this closing fragment survives:] "…the same as those in the previous row and go to step 6. Otherwise, shift the window rightward by the distance wm(N) and return to step 2."

Step 6) Terminate the determination process of the N-th row, and export the results of GmH, GmL, GgH, and GgL.


VITA

Ph.D. Candidate: Chuan-Tsai Lin (林全財)    Advisor: Bing-Fei Wu (吳炳飛)

Dissertation Title: A Study of Image Processing and Computer Vision Techniques for Driving Assistance Systems (影像處理與電腦視覺技術應用於駕駛輔助系統之研究)

Education

1. Sep. 1990 – Jun. 1994: Department of Industrial Education, National Changhua University of Education
2. Sep. 1998 – Jun. 2000: Institute of Electrical Engineering, National Chung Cheng University
3. Sep. 2003 – present: Ph.D. Program, Department of Electrical and Control Engineering, National Chiao Tung University

Experience

1. Aug. 1994 – Jul. 1999: Teacher, Department of Electronic Communication, National Keelung Maritime Vocational High School
2. Jul. 1995 – Jun. 1997: Second Lieutenant (reserve officer), R.O.C. Air Force
3. Aug. 2000 – Jul. 2005: Teacher and Department Chair, Department of Electronics, the Affiliated Industrial Vocational High School of National Changhua University of Education
