
Chapter 4 Obstacle Detection Algorithm

4.5 Distance Measurement

For the safety of backing-up maneuvers, the driver is concerned with the positions of objects in the rear view during parking. Among all obstacle information, the distance between a target and the ego-vehicle is the most essential: knowing it tells the driver whether an object is near or far from the ego-vehicle, so some backing-up collisions can be avoided. The distance we want to measure is to the closest point of each obstacle, because a collision would occur there first. Therefore, our procedure estimates the distance between the vehicle and the lowest position of each obstacle, as located by the obstacle localization procedure.

Fig. 4-23 Transformation between image coordinate and world coordinate

As shown in Fig. 4-23, all obstacle positions can be transformed from the image coordinate system to the world coordinate system by the IPM calibration procedure. Therefore, the first step of distance measurement is to obtain the world coordinates of each target.

The world coordinates are then converted to a real distance by scaling. To estimate the scale between real length and world coordinates, horizontal and vertical reference lines on the ground are measured as depicted in Fig. 4-24. By computing the ratio between the length of a reference line in the real scene and its length in world coordinates, the horizontal and vertical scales are obtained for mapping world coordinates to real lengths. Fig. 4-25 illustrates the flow of distance measurement: the image coordinates of a target are first transformed to world coordinates, which are then mapped to a real distance, yielding how far the target is from our vehicle.
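As a minimal sketch of this two-step mapping (image coordinates to world coordinates via the IPM homography, then world coordinates to real distance via the measured scales), the following Python fragment uses a hypothetical homography matrix and assumed scale factors; the actual values would come from the IPM calibration procedure and the reference-line measurement of Fig. 4-24.

```python
import numpy as np

# Hypothetical homography mapping image pixels to bird's-eye ("world")
# coordinates; in practice this matrix comes from the IPM calibration.
H_IPM = np.array([[0.05, -0.02, 10.0],
                  [0.00,  0.10, -5.0],
                  [0.00,  0.001, 1.0]])

def image_to_world(u, v, H=H_IPM):
    """Map an image point (u, v) to world coordinates via the homography."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Scales derived from reference lines of known real length on the ground
# (Fig. 4-24): real length divided by its length in world units.
# The values below are assumptions for illustration only.
SCALE_X = 0.025   # metres per horizontal world unit
SCALE_Y = 0.031   # metres per vertical world unit

def world_to_distance(xw, yw):
    """Map world coordinates to a real distance from the ego-vehicle."""
    return float(np.hypot(xw * SCALE_X, yw * SCALE_Y))

def measure_distance(u, v):
    """Full pipeline of Fig. 4-25: image -> world -> real distance."""
    return world_to_distance(*image_to_world(u, v))
```

The homography and scales are decoupled deliberately: recalibrating the camera changes only `H_IPM`, while re-measuring the reference lines changes only the two scale factors.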


Fig. 4-24 Scale measure between world coordinate and real length

Image coordinate → World coordinate → Distance

Fig. 4-25 Flow of distance measurement


Chapter 5

Experimental Results

5.1 Experimental Environments

Since our research develops a vision-based obstacle detection algorithm, we mounted a CCD camera on the back of a vehicle at a fixed height and tilt angle. The camera setup environment is shown in Fig. 5-1. Our algorithm was implemented on a PC platform with an Intel Core 2 Duo 2.2 GHz CPU and 2 GB of RAM.

Borland C++ Builder was our development tool, running on Windows XP. All of our test inputs are uncompressed AVI video files with a frame resolution of 320×240.

Fig. 5-1 Environment of camera setup

5.2 Experimental Results of Obstacle Detection

In this section, several results of obstacle detection are presented. We show the experimental results of the proposed algorithm under conditions that can occur while backing up, such as a pedestrian walking or stopping in the vehicle's driving path, or some common parking situations. In the following figures, the left column contains the original video sequence and the right column contains the detection results of the proposed algorithm. A red block marks the target position, annotated with a value indicating the distance between the target and our vehicle. The upper horizontal red line indicates the detection range, which is the region below the line in the image. Because of the geometric characteristics of IPM, the detection range is limited to the region under the horizon, and the position of the line changes with the camera setup (such as camera height or tilt angle).

Fig. 5-2 illustrates a scenario in which a pedestrian stands or squats on the vehicle's driving path. The results show that a static pedestrian can be detected whether standing or squatting at a lower height.


Fig. 5-2 Pedestrian stopped on vehicle's driving path

Fig. 5-3 illustrates an example in which a pedestrian crosses the vehicle's path while the vehicle is moving straight. The passing pedestrian is detected correctly.

Fig. 5-3 Pedestrian crossing the vehicle's driving path


Fig. 5-4 shows a typical parking situation in which a vehicle is stopped beside the parking space. The stopped vehicle is detected correctly while the driver is parking.

Fig. 5-4 A typical parking situation

Fig. 5-5 illustrates a multiple-object condition while backing up: a vehicle is stopped beside the parking space and a pedestrian is squatting in the parking space at a low height that a driver could easily overlook. The proposed system detects both simultaneously.


Fig. 5-5 Multiple objects in parking situation

In Fig. 5-6 we test our system in a low-contrast evening environment, which commonly occurs when parking. Fig. 5-7 shows another low-contrast environment, further interfered with by brake lights. Our system is not affected by these external conditions, and the obstacle regions are detected correctly.

Fig. 5-6 A low contrast environment when backing up


Fig. 5-7 A low-contrast environment with brake-light interference when backing up

5.3 Accuracy Evaluation

5.3.1 Compensation Evaluation

To verify the accuracy of the proposed ground movement estimation technique, an experiment is designed to check the compensation of ground movement.

The first task is to establish ground truth manually to assist in evaluating the results. To evaluate the compensation of ground movement, we marked three ground points in each frame together with their corresponding positions in the previous frame. As shown in Fig. 5-8, the three ground points are marked between consecutive frame images.


Fig. 5-8 Diagram of ground-truth building (left: previous frame; right: current frame)

Once the ground truth is established, for each ground point in the previous frame we use the proposed algorithm to obtain its compensated position. If the ground movement information is accurate, the compensated position should be identical to the ground-truth position in the current frame. Therefore, Eq. (5.1) is used to calculate the compensation error, i.e., the distance between the actual position and the compensated position. We then implement the approach introduced in [6] and compute its results for comparison. The comparison over 514 ground-truth samples is presented in Table 5-1. The compensation error is calculated both as a real distance and in the bird's-eye-view image scaled to 320×240. The proposed approach outperforms the method in [6], with average compensation errors of 5.5 cm versus 11.2 cm in the real scene.

Compensation error = dist(current_position, compensated_position)    (5.1)
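Eq. (5.1) can be sketched in Python as a Euclidean distance between the manually marked current-frame point and the compensated point; the ground-truth pairs below are hypothetical stand-ins for the 514 marked samples.

```python
import numpy as np

def compensation_error(current_pos, compensated_pos):
    """Eq. (5.1): Euclidean distance between the ground-truth point in
    the current frame and the position predicted by applying the
    estimated ground movement to the previous-frame point."""
    diff = np.asarray(current_pos, dtype=float) - np.asarray(compensated_pos, dtype=float)
    return float(np.linalg.norm(diff))

# Hypothetical ground-truth pairs (actual position in current frame,
# compensated position), in pixels of the 320x240 bird's-eye image.
pairs = [((100, 200), (101, 200)),
         ((150, 180), (150, 181)),
         ((200, 160), (199, 161))]

errors = [compensation_error(actual, comp) for actual, comp in pairs]
avg_error_px = sum(errors) / len(errors)   # average error in pixels
```

Averaging this error over all marked points, in both pixel and real-distance units, yields the figures compared in Table 5-1.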

Compensation error (avg.)        Proposed method    [6]
Bird's-eye-view image (pixel)    0.9                1.82
Real distance (cm)               5.5                11.2

Table 5-1 Comparison results of compensation error


5.3.2 Accuracy Evaluation of Obstacle Detection

To assess the performance of the proposed obstacle detection algorithm, it is necessary to test the system under many different conditions corresponding as closely as possible to real-life backing-up situations. We developed a group of evaluation scenarios, such as a pedestrian crossing or stopped in the vehicle's driving path, and common parking situations with objects in the parking space during daytime or evening.

We then verify the effectiveness of the proposed system with an event-based method that evaluates the detection rate and false alarm rate by counting the events triggered in every frame. The false alarm rate and failed detection rate are calculated as follows:

False alarm rate = (N_false detected / N_correct detected) × 100%

Failed detection rate = (N_non-detected / N_obstacle) × 100%

In this experiment, a total of 851 frames were extracted from seven daytime and evening situations, containing 813 obstacle spots to be detected. The experimental results are shown in Table 5-2: the detection rate is 86.7% and the false alarm rate is 2.5%. The experimental results thus demonstrate the effectiveness of the proposed technique.
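As a sketch of this event-based evaluation, the following Python computes the rates from the event counts, assuming (as the tables imply) that false alarms are measured relative to correct detections and missed detections relative to all obstacle spots:

```python
def evaluation_rates(correct, false_det, failed):
    """Event-based rates used in Tables 5-2 and 5-3.

    detection rate        = correct / (correct + failed)
    failed detection rate = failed  / (correct + failed)
    false alarm rate      = false_det / correct
    """
    total_obstacles = correct + failed
    return {
        "detection_rate": 100.0 * correct / total_obstacles,
        "failed_rate": 100.0 * failed / total_obstacles,
        "false_alarm_rate": 100.0 * false_det / correct,
    }

# Counts from Table 5-2 (proposed method): 705 correct, 18 false, 108 missed.
rates = evaluation_rates(705, 18, 108)
# detection_rate ~ 86.7, failed_rate ~ 13.3;
# false_alarm_rate ~ 2.55, reported as 2.5% in Table 5-2.
```

Running the same function on the counts of Table 5-3 (638, 158, 175) reproduces the 78.5% detection rate and 24.8% false alarm rate reported for [6].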


                 Correct detection    False detection    Failed detection
Obstacle spots   705                  18                 108
Rate (%)         86.7                 2.5                13.3

Table 5-2 Accuracy evaluation of the proposed obstacle detection

We then implement the approach introduced in [6] and compute its results. Table 5-3 shows that its detection rate is 78.5% and its false alarm rate is 24.8%. The proposed approach outperforms the method in [6] in detection rate and especially in false alarm rate. This is because the method in [6] uses the concept of motion similarity to extract ground feature points and estimate ground movement, but in many cases the resulting ground movement information is erroneous. The erroneous ground movement information causes many false alarms, and the detection rate also decreases because a false position is localized at a lower spot caused by lane markings, so the true obstacle position is missed. As depicted in Fig. 5-9, erroneous ground movement can result in a large number of false alarms.

                 Correct detection    False detection    Failed detection
Obstacle spots   638                  158                175
Rate (%)         78.5                 24.8               21.5

Table 5-3 Accuracy evaluation of [6]

Fig. 5-9 Result of erroneous ground movement


5.3.3 Accuracy Evaluation of Obstacle Distance

To evaluate the distance measurement, we compare the length of a real line, such as the lane marking illustrated in Fig. 5-10, with the length estimated by the proposed procedure to obtain the distance measurement error. The errors over 590 test samples, within the short and long ranges respectively, are shown in Table 5-4.

Due to the geometric characteristics of the calibration procedure, the calibration error grows in the far range. The average distance error is therefore about 0.16 m in the near range but about 0.68 m in the far range. The experimental results demonstrate that the estimated distance of the target position is accurate.
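The per-range figures of Table 5-4 can be sketched as mean absolute errors over (ground-truth, estimate) pairs split at the 5 m boundary; the sample values below are hypothetical, standing in for the 590 measured data points.

```python
def average_distance_error(measurements):
    """Split (true_distance_m, estimated_distance_m) samples into the
    near (3-5 m) and far (5-8 m) ranges of Table 5-4 and return the
    mean absolute error for each range."""
    def mean(xs):
        return sum(xs) / len(xs) if xs else float("nan")

    near = [abs(t - e) for t, e in measurements if 3.0 <= t < 5.0]
    far = [abs(t - e) for t, e in measurements if 5.0 <= t <= 8.0]
    return mean(near), mean(far)

# Hypothetical samples (ground-truth lane-marking distance, estimate):
samples = [(3.5, 3.4), (4.2, 4.4), (6.0, 5.5), (7.5, 8.1)]
near_err, far_err = average_distance_error(samples)
```

Even these toy samples show the pattern reported in Table 5-4: the error in the far range is larger, reflecting the growing IPM calibration error with distance.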

Fig. 5-10 Lane marking for distance measurement

                             Near range (3–5 m)    Far range (5–8 m)
Average distance error (m)   0.16                  0.68

Table 5-4 Experimental result of distance measurement


Chapter 6

Conclusions and Future Work

For generic obstacle detection, researchers have proposed many methods focusing on stereo vision. In contrast, we propose a system that can automatically detect obstacles with only a single camera mounted on a moving vehicle. Moreover, while the movement of the ego-vehicle is generally acquired from external sensors such as an odometer, we propose a ground movement estimation method that obtains this information effectively from image knowledge alone.

In our research, we regard as an obstacle any object that can obstruct the vehicle's driving path or anything that rises significantly above the road surface. Therefore, we propose a ground-movement-compensation-based approach that detects non-planar objects by exploiting the different characteristics that planar and non-planar objects exhibit under IPM. The proposed ground movement estimation technique employs road detection to obtain the most useful ground features in the image; by analyzing the principal distribution of the optical flow of these feature points, the ground movement used for compensation is obtained accurately. Accurate ground movement information improves the performance of obstacle detection, and experimental results under many conditions that can occur while backing up demonstrate the effectiveness of the proposed obstacle detection algorithm. Finally, we use the calibration procedure to measure the distance to every detected obstacle. With this distance information, the driver can tell whether objects are near or far from the ego-vehicle. The distance estimates are also verified against practical measurements, and the proposed method achieves good accuracy.

So far, the proposed obstacle detection algorithm operates well under various conditions during the backing-up maneuver. However, a weak point of the proposed compensation-based detection is detection while the vehicle is stationary. Future work should therefore be committed to using a single frame to detect non-planar objects, improving performance in stationary scenes.


References

[1] http://www.freeway.gov.tw/Publish.aspx?cnid=590&p=94

[2] http://www.iek.itri.org.tw

[3] http://www.motc.gov.tw/mocwebGIP/wSite/mp?mp=1

[4] NHTSA, "Vehicle backover avoidance technology study," Report to Congress, November 2006.

[5] M. Bertozzi, A. Broggi, and A. Fascioli, "Stereo inverse perspective mapping: theory and applications," Image and Vision Computing, vol. 16, pp. 585-590, Jun. 1998.

[6] M. Bertozzi and A. Broggi, "GOLD: A parallel real-time stereo vision system for generic obstacle and lane detection," IEEE Transactions on Image Processing, vol. 7, pp. 62-81, Jan. 1998.

[7] Wen-Liang Ji, "A CCD-Based Intelligent Driver Assistance System Based on Lane and Vehicle Tracking," Ph.D. dissertation, National Cheng Kung University, 2005.

[8] P. Cerri and P. Grisleri, "Free space detection on highways using time correlation between stabilized sub-pixel precision IPM images," in Proc. 2005 IEEE International Conference on Robotics and Automation (ICRA 2005), 2005, pp. 2223-2228.

[9] A. M. Muad, A. Hussain, S. A. Samad, M. M. Mustaffa, and B. Y. Majlis, "Implementation of inverse perspective mapping algorithm for the development of an automatic lane tracking system," in Proc. TENCON 2004, IEEE Region 10 Conference, 2004, vol. 1, pp. 207-210.

[10] S. Tan, J. Dale, A. Anderson, and A. Johnston, "Inverse perspective mapping and optic flow: A calibration method and a quantitative analysis," Image and Vision Computing, vol. 24, pp. 153-165, Feb. 2006.

[11] J. Gang Yi, C. Tae Young, H. Suk Kyo, B. Jae Wook, and S. Byung Suk, "Lane and obstacle detection based on fast inverse perspective mapping algorithm," in Proc. 2000 IEEE International Conference on Systems, Man, and Cybernetics, 2000, vol. 4, pp. 2969-2974.

[12] M. Nieto, L. Salgado, F. Jaureguizar, and J. Cabrera, "Stabilization of inverse perspective mapping images based on robust vanishing point estimation," in Proc. 2007 IEEE Intelligent Vehicles Symposium, 2007, pp. 315-320.

[13] Ching-Chiuan Yang, "Construction of Fisheye Lens Inverse Perspective Mapping Model and Its Application of Obstacle Detection," Master's thesis, National Chiao Tung University, June 2008.

[14] Q. T. Luong, J. Weber, D. Koller, and J. Malik, "An integrated stereo-based approach to automatic vehicle guidance," in Proc. Fifth International Conference on Computer Vision, 1995, pp. 52-57.

[15] K. Onoguchi, "Shadow elimination method for moving object detection," in Proc. 14th International Conference on Pattern Recognition, Brisbane, Qld, Australia, August 1998, vol. 1, pp. 583-587.

[16] W. Kruger, W. Enkelmann, and S. Rossle, "Real-time estimation and tracking of optical flow vectors for obstacle detection," in Proc. Intelligent Vehicles '95 Symposium, 1995, pp. 304-309.

[17] Guanglin Ma, Su-Birm Park, S. Müller-Schneiders, A. Ioffe, and A. Kummert, "Vision-based pedestrian detection - reliable pedestrian candidate detection by combining IPM and a 1D profile," in Proc. IEEE Intelligent Transportation Systems Conference (ITSC 2007), 2007.

[18] Guanglin Ma, Su-Birm Park, S. Müller-Schneiders, A. Ioffe, and A. Kummert, "A real time object detection approach applied to reliable pedestrian detection," in Proc. 2007 IEEE Intelligent Vehicles Symposium, Istanbul, Turkey, June 13-15, 2007.

[19] Guanglin Ma, Su-Birm Park, S. Müller-Schneiders, A. Ioffe, and A. Kummert, "Pedestrian detection using a single monochrome camera," IET Intelligent Transport Systems, pp. 42-56, March 2009.

[20] M. Bertozzi, A. Broggi, P. Medici, P. P. Porta, and R. Vitulli, "Obstacle detection for start-inhibit and low speed driving," in Proc. 2005 IEEE Intelligent Vehicles Symposium, 2005.

[21] Changhui Yang, Hitoshi Hongo, and Shinichi Tanimoto, "A new approach for in-vehicle camera obstacle detection by ground movement compensation," in Proc. 11th International IEEE Conference on Intelligent Transportation Systems, Beijing, China, October 12-15, 2008.

[22] B. K. P. Horn and B. G. Schunck, "Determining optical flow," Artificial Intelligence, vol. 17, pp. 185-203, 1981.

[23] J.-Y. Bouguet, "Pyramidal implementation of the Lucas-Kanade feature tracker: description of the algorithm," Intel Corporation, Microprocessor Research Labs, OpenCV document.

[24] Gary Bradski and Adrian Kaehler, Learning OpenCV: Computer Vision with the OpenCV Library, O'Reilly.

[25] C. Harris and M. Stephens, "A combined corner and edge detector," in Proc. 4th Alvey Vision Conference, 1988, pp. 147-151.

[26] Shen-Chi Chen, "A New Method of Efficient Road Boundary Tracking Algorithm Based on Temporal Region Ratio and Edge Constraint," Master's thesis, National Chiao Tung University, June 2009.
