
Chapter 6 Vehicle Guidance on Sidewalks by Curb Following

6.2 Proposed Obstacle Detection and Avoidance Process

6.2.4 Proposed method for obstacle avoidance

In this section, we summarize the techniques mentioned previously into two algorithms for obstacle avoidance. Algorithm 6.4 is the decision procedure executed before obstacle avoidance, and Algorithm 6.5 is the procedure for avoiding obstacles.

Algorithm 6.4. Preparation for obstacle avoidance.

Input: an input image Iinput.

Output: none; control is passed to Algorithm 6.5 when the vehicle is close enough to an obstacle.

Steps:

Step 1. Detect obstacles in Iinput by the method mentioned in Section 6.2.2.

Step 2. Compute the outlines in the top portion FT and the bottom portion FB of possible obstacle features mentioned in Section 6.2.2.

Step 3. Calculate the range data of FT and FB, respectively, as mentioned in Section 6.2.2.

Step 4. Compute D0 from FB as mentioned in Section 6.2.2.

Step 5. Go to Algorithm 6.5 if D0 is small enough, i.e., if the vehicle is close to the obstacle.
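For clarity, a minimal Python sketch of this decision flow is given below. The helper functions detect_obstacle_features and compute_range_cm and the threshold D_NEAR_CM are illustrative stand-ins for the routines of Section 6.2.2; they are assumptions made for this sketch and are not part of the implemented system.

D_NEAR_CM = 120.0   # assumed "close enough" threshold in centimetres

def detect_obstacle_features(image):
    # Placeholder for Steps 1-2: would return the top outline FT and the
    # bottom outline FB of each possible obstacle found in the omni-image.
    return [([(10, 20)], [(12, 22)])]          # one dummy obstacle

def compute_range_cm(outline):
    # Placeholder for Step 3: would convert outline pixels to range data.
    return [110.0 for _ in outline]

def prepare_obstacle_avoidance(image):
    for ft, fb in detect_obstacle_features(image):   # Steps 1-2
        range_ft = compute_range_cm(ft)              # Step 3
        range_fb = compute_range_cm(fb)
        d0 = min(range_fb)                           # Step 4
        if d0 < D_NEAR_CM:                           # Step 5
            return ft, fb, range_ft + range_fb       # hand over to Algorithm 6.5
    return None

print(prepare_obstacle_avoidance(None))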

Algorithm 6.5. Avoidance of obstacles.

Input: a set of the range data R of the obstacle and the path nodes.

Output: avoidance points AP1, AP2 and a virtual node NVD for obstacle avoidance.

Steps:

Step 1. Compute the 3D information of the obstacle by subtracting FB from FT using Equations (6.1) and (6.2).

Step 2. If the obstacle is too short to block the navigation path, go to Step 4 directly without creating the avoidance points and skip Steps 5 and 6; otherwise, go to Step 3.

Step 3. Compute the avoidance points AP by the following steps:

(a) compute the unit vector u of the obstacle perpendicular to the direction of the navigation path;

(c) compute the avoidance points from the left and right end points OL and OR of the obstacle by:

AP1 = OL - Wvehicle × u;  AP2 = OR + Wvehicle × u.    (6.5)

Step 4. Compute the location of the virtual node NVD by Equations (6.3) and (6.4).

Step 5. Compute the distances d1 and d2 of two avoidance paths from the current position to NVD through AP1 and AP2, respectively.

Step 6. Choose the avoidance path with the smaller of the two distances d1 and d2.

Step 7. Determine whether the avoidance path is available and whether NVD should be changed by Algorithm 6.3.
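A minimal sketch of Steps 3, 5, and 6 is given below. The sign convention adopted for Equation (6.5) (offsetting the obstacle end points OL and OR outward by one vehicle width along the unit vector u perpendicular to the path direction), the vehicle width value, and the function names are assumptions made for illustration only.

import numpy as np

W_VEHICLE = 70.0          # assumed vehicle width in centimetres

def avoidance_points(o_left, o_right, path_dir):
    # Step 3: offset the obstacle end points along the perpendicular unit vector.
    path_dir = np.asarray(path_dir, dtype=float)
    u = np.array([-path_dir[1], path_dir[0]])      # perpendicular to the path
    u /= np.linalg.norm(u)
    ap1 = np.asarray(o_left, dtype=float) - W_VEHICLE * u     # Equation (6.5)
    ap2 = np.asarray(o_right, dtype=float) + W_VEHICLE * u
    return ap1, ap2

def choose_path(current, ap1, ap2, virtual_node):
    # Steps 5-6: pick the detour (current -> AP -> NVD) with the smaller length.
    d1 = np.linalg.norm(ap1 - current) + np.linalg.norm(virtual_node - ap1)
    d2 = np.linalg.norm(ap2 - current) + np.linalg.norm(virtual_node - ap2)
    return (ap1, d1) if d1 <= d2 else (ap2, d2)

ap1, ap2 = avoidance_points(o_left=(100, -40), o_right=(100, 40), path_dir=(1, 0))
print(choose_path(np.array([0.0, 0.0]), ap1, ap2, np.array([250.0, 0.0])))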

Chapter 7

Experimental Results and Discussions

7.1 Experimental Results

In this chapter, we will show some experimental results of the proposed system in an outdoor environment. Figure 7.1 shows the experimental environment.

In the learning procedure, the system follows the curbstone during navigation.

Figure 7.2(a) shows the curbstone in front of the vehicle and Figure 7.2(b) shows the resulting image of feature extraction. Figure 7.3(a) shows the curbstone at the lateral side of the vehicle and Figure 7.3(b) shows the resulting image of feature extraction. If the system detects a human hand in the pre-defined region of the camera enclosure, it changes from the guide line detection mode to the blind navigation mode. Figure 7.4(a) shows the user instructing the vehicle by hand poses and Figure 7.4(b) shows the resulting image taken when the system was detecting a human hand. Figure 7.5(a) shows a map created in the learning procedure before the path planning process is executed, and Figure 7.5(b) shows the map modified after the path planning process.

Figure 7.1 The experimental environment.

(a)

(b)

Figure 7.2 A curbstone appearing in front of the vehicle. (a) A captured image. (b) An image obtained from processing (a) with extracted feature points.

(a)

(b)

Figure 7.3 A curbstone appearing at the lateral of the vehicle. (a) A captured image. (b) An image obtained from processing (a) with extracted feature points.

(a)

(b)

Figure 7.4 A resulting image of hand pose detection. (a) A user instructing the vehicle by hand. (b) The human hand detected in a pre-defined region.

(a)

(b)

Figure 7.5 Two navigation maps. (a) A map created before path planning. (b) A map obtained from modifying (a) after path planning.

In the navigation procedure for guiding a blind person, the system synchronizes its speed with the person's speed. The signals captured from the six sonar sensors are shown in Figure 7.6. The synchronization method uses the sonar signals to compute the distance between the vehicle and the user. When an obstacle is detected, the system avoids it if it is not flat and blocks the navigation path. An experimental result showing the system detecting and avoiding an obstacle is shown in Figure 7.7.
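A minimal sketch of this kind of sonar-based speed synchronization is given below. The proportional speed-correction rule and all constants are assumptions for illustration, since the thesis only states that the sonar signals are used to compute the user-to-vehicle distance.

SOUND_SPEED_CM_S = 34300.0     # speed of sound in air
TARGET_GAP_CM = 100.0          # assumed desired distance between vehicle and user
GAIN = 0.5                     # assumed speed correction per cm of gap error
MAX_SPEED = 60.0               # assumed speed limit in cm/s

def distance_from_echo(time_of_flight_s):
    # Sonar range: half the round-trip travel time times the speed of sound.
    return 0.5 * time_of_flight_s * SOUND_SPEED_CM_S

def synchronized_speed(current_speed, echo_times):
    # Use the nearest echo among the rear sonar sensors as the user distance,
    # then speed up or slow down to keep that distance near the target gap.
    gap = min(distance_from_echo(t) for t in echo_times)
    new_speed = current_speed + GAIN * (gap - TARGET_GAP_CM)
    return max(0.0, min(MAX_SPEED, new_speed))

print(synchronized_speed(30.0, [0.0062, 0.0071, 0.0080]))   # user about 106 cm behind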

7.2 Discussions

By analyzing the experimental results of the learning procedure and the navigation procedure, we observe some problems. First, in guide line detection, we use the sidewalk color to extract the guide line. If there are too many colors on the sidewalk, the system becomes confused and cannot decide which features to use for guiding the vehicle; in this case, all of the colors have to be learned as features. Also, the adjustment of the vehicle speed should be quicker and smoother, which can be achieved by using a faster CPU. Furthermore, because of the use of the plastic camera enclosure, light reflection from the enclosure creates undesired lighting spots in the omni-image, as illustrated by Figure 7.8. This hopefully may be improved in future designs of the camera system. Finally, more experiments may be conducted to test the system in different environments.
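Regarding the first problem above, a minimal sketch of matching pixels against several learned sidewalk colors is given below. The learned color values and the tolerance are assumed example values, not the actual parameters of the system; the thesis only states that every sidewalk color must be learned as a feature.

import numpy as np

LEARNED_COLORS = np.array([[180, 175, 170],    # light pavement (assumed)
                           [120, 115, 110],    # dark pavement (assumed)
                           [200, 60, 50]])     # red bricks (assumed)
TOLERANCE = 30.0                               # assumed per-pixel color distance threshold

def sidewalk_mask(image_rgb):
    # Mark a pixel as "sidewalk" if it is close to any of the learned colors.
    pixels = image_rgb.reshape(-1, 1, 3).astype(float)
    dist = np.linalg.norm(pixels - LEARNED_COLORS[None, :, :], axis=2)
    return (dist.min(axis=1) < TOLERANCE).reshape(image_rgb.shape[:2])

demo = np.full((4, 4, 3), 178, dtype=np.uint8)   # a uniform light-gray patch
print(sidewalk_mask(demo).all())                  # True: every pixel matches a learned color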

Figure 7.6 Sonar signals obtained when a person stands behind the vehicle.

(a) (b)

(c) (d)

Figure 7.7 An experimental result of an obstacle avoidance process. (a) An obstacle in front of the vehicle. (b) A side view of (a). (c) The vehicle avoiding the obstacle. (d) A side view of (c). (e) Obstacle detection result using a captured image.

(e)

Figure 7.7 An experimental result of an obstacle avoidance process (cont'd). (a) An obstacle in front of the vehicle. (b) A side view of (a). (c) The vehicle avoiding the obstacle. (d) A side view of (c). (e) Obstacle detection result using a captured image.

Figure 7.8 Light pollution on the omni-image.

Chapter 8

Conclusions and Suggestions for Future Works

8.1 Conclusions

In this study, we have designed an autonomous vehicle system, equipped with a newly-designed two-mirror omni-camera as the visual sensor, which navigates on sidewalks for use as a guide dog. Several techniques for implementing such a system have been proposed.

First, we have derived a new formula for designing the two-mirror omni-camera, which is composed of two reflective mirrors with hyperboloidal shapes and a traditional projective camera. The formula describes the relationship between the parameters of the mirror surface and the position of the mirror with respect to the camera. With this general formula, two-mirror omni-cameras with different parameters can be produced easily.

Next, we have proposed several new techniques for calibration of the camera and of the mechanical error of the autonomous vehicle. The camera calibration technique is based on the pano-mapping technique proposed by Jeng and Tsai [22]. A mapping table which describes the relationship between the image pixels and the elevation angles with respect to the hyperbolic mirror has been created and used in object localization and 3D data computation. To calibrate the mechanical error of the odometer installed on the vehicle, a calibration model based on the curve fitting technique has been proposed, and the mechanical error is reduced by the use of this calibration model.
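As an illustration of the curve-fitting idea behind the odometer calibration, a minimal sketch is given below. The sample odometer readings, the ground-truth distances, and the choice of a quadratic model are assumptions made for this sketch; the thesis only states that a calibration model based on curve fitting is used.

import numpy as np

# Odometer readings taken during calibration runs and the corresponding
# ground-truth distances (assumed example values, in centimetres).
odometer_cm = np.array([0, 100, 200, 300, 400, 500], dtype=float)
measured_cm = np.array([0, 96, 190, 282, 372, 461], dtype=float)

coeffs = np.polyfit(odometer_cm, measured_cm, deg=2)   # fit the calibration curve
calibrate = np.poly1d(coeffs)                          # odometer reading -> corrected distance

print(calibrate(250.0))   # corrected distance for a 250 cm odometer reading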

Furthermore, we have proposed new techniques for vehicle guidance in the learning procedure and in the navigation procedure. To learn environment information, a semi-automatic method based on the line following technique has been proposed. The vehicle navigates on sidewalks by using the features of the curbstone and the technique of line following. If no special feature exists or if the features are not easy to extract, a new human interaction technique based on human hand pose detection has been proposed to solve the problem. Different pre-defined hand poses on the camera enclosure are decoded to issue commands to guide the vehicle when necessary.

To adapt to varying light intensities in outdoor environments, two techniques, called dynamic exposure adjustment and dynamic thresholding adjustment, have been proposed. Also, to create an environment map, a path planning technique has been proposed, which identifies path nodes at critical spots on a learned navigation route to create a navigation map.
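A minimal sketch of the idea behind dynamic thresholding adjustment is given below: the feature-extraction threshold is rescaled with the average brightness of the current image. The base threshold, the reference brightness, and the linear scaling rule are assumptions for illustration and may differ from the adjustment rules actually used in the system.

import numpy as np

BASE_THRESHOLD = 60.0        # assumed threshold tuned at the reference brightness
REFERENCE_BRIGHTNESS = 128.0 # assumed average gray level the base threshold was tuned for

def dynamic_threshold(gray_image):
    # Scale the threshold linearly with the current average image brightness.
    brightness = float(np.mean(gray_image))
    return BASE_THRESHOLD * brightness / REFERENCE_BRIGHTNESS

dark = np.full((10, 10), 64, dtype=np.uint8)
bright = np.full((10, 10), 200, dtype=np.uint8)
print(dynamic_threshold(dark), dynamic_threshold(bright))   # 30.0 93.75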

To navigate in the environment with the path node map, a new technique is used to avoid dynamic obstacles on the navigation path without leaving the guide line on the sidewalk. To guide a person in the environment, a sonar signal processing method for synchronizing the speed of the vehicle with that of the person has been proposed. Also proposed is a technique for computing the location of the vehicle in the environment.

Good experimental results show the feasibility of the proposed system.

8.2 Suggestions for Future Works

According to our experience obtained in this study, several suggestions for future works are listed in the following.

1. The idea of the proposed calibration method can be applied to other types of stereo cameras.

2. Mounting the two-mirror omni-camera on the autonomous vehicle in different orientations, such as down-looking, may be tried.

3. Using different features, such as trees, street lights, and special signboards, to guide the autonomous vehicle in outdoor environments may be attempted.

4. Developing different human interface techniques by using different features, such as the human body, human motions, and human faces, may be conducted.

5. Designing new algorithms to compute the 3D range data of objects more quickly is worth studying.

References

[1] J. Borenstein and I. Ulrich, “The GuideCane - A Computerized Travel Aid for the Active Guidance of Blind Pedestrians,” Proceedings of the IEEE International Conference on Robotics and Automation, Albuquerque, NM, Apr. 21-27, 1997, pp. 1283-1288.

[2] C. C. Sun and M. C. Su, “A Low-Cost Travel-Aid for the Blind,” M. S. Thesis, Department of Computer Science and Information Engineering, National Central University, Jhongli, Taoyuan, Taiwan, June 2005.

[3] S. Tachi and K. Komoriya, “Guide dog robot,” Proceedings of the 2nd International Congress on Robotics Research, Kyoto, Japan, 1984, pp. 333-340.

[4] The Robot World.

http://www.robotworld.org.tw/

[5] National Yunlin University of Science and Technology.

http://www.swcenter.yuntech.edu.tw/

[6] J. Kannala and S. Brandt, “A Generic Camera Calibration Method for Fish-Eye Lenses,” Proceedings of the 17th International Conference on Pattern Recognition, Vol. 1, pp. 10-13, August 2004; Cambridge, U.K.

[7] C. J. Wu, “New Localization and Image Adjustment Techniques Using Omni-Cameras for Autonomous Vehicle Applications,” Ph. D. Dissertation, Institute of Computer Science and Engineering, National Chiao Tung University, Hsinchu, Taiwan, Republic of China, July 2009.

[8] S. K. Nayar, “Catadioptric Omni-directional Camera,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, San Juan, Puerto Rico, June 1997, pp. 482-488.

[9] S. Baker and S. K. Nayar, “A Theory of Single-Viewpoint Catadioptric Image Formation,” International Journal of Computer Vision, Vol. 35, No. 2, pp. 175-196, November 1999.

[10] H. Ukida, N. Yamato, Y. Tanimoto, T. Sano and H. Yamamoto, “Omni-directional 3D Measurement by Hyperbolic Mirror Cameras and Pattern Projection,” Proceedings of the IEEE International Instrumentation and Measurement Technology Conference, Victoria, Vancouver Island, Canada, May 12-15, 2008.

[11] Z. Zhu, “Omnidirectional Stereo Vision,” Proceedings of the 10th IEEE International Conference on Advanced Robotics (ICAR), Budapest, Hungary, August 22-25, 2001.

[12] L. He, C. Luo, F. Zhu, Y. Hao, J. Ou and J. Zhou, “Depth Map Regeneration via Improved Graph Cuts Using a Novel Omnidirectional Stereo Sensor,” Proceedings of the 11th IEEE International Conference on Computer Vision (ICCV 2007), Rio de Janeiro, Brazil, Oct. 14-21, 2007, pp. 1-8.

[13] S. Yi and N. Ahuja, “An Omnidirectional Stereo Vision System Using a Single Camera,” Proceedings of the 18th International Conference on Pattern Recognition (ICPR’06), Hong Kong, Aug. 20-24, 2006.

[14] G. Jang, S. Kim and I. Kweon, “Single Camera Catadioptric Stereo System,” Proceedings of the Workshop on Omnidirectional Vision, Camera Networks and Non-classical Cameras (OMNIVIS 2005), 2005.

[15] K. C. Chen and W. H. Tsai, “A study on autonomous vehicle navigation by 3D object image matching and 3D computer vision analysis for indoor security patrolling applications,” Proceedings of 2007 Conference on Computer Vision, Graphics and Image Processing, Miaoli, Taiwan, June 2007.

[16] J. Y. Wang and W. H. Tsai, “A Study on Indoor Security Surveillance by Vision-based Autonomous Vehicles with Omni-cameras on House Ceilings,” M. S. Thesis, Institute of Computer Science and Engineering, National Chiao Tung University, Hsinchu, Taiwan, Republic of China.

[17] S. Y. Tsai and W. H. Tsai, "Simple automatic path learning for autonomous vehicle navigation by ultrasonic sensing and computer vision techniques," Proceedings of 2008 International Computer Symposium, Vol. 2, pp. 207-212, Taipei, Taiwan, Republic of China.

[18] K. T. Chen and W. H. Tsai, "A study on autonomous vehicle guidance for person following by 2D human image analysis and 3D computer vision techniques," Proceedings of 2007 Conference on Computer Vision, Graphics and Image Processing, Miaoli, Taiwan, Republic of China.

[19] M. F. Chen and W. H. Tsai, "Automatic learning and guidance for indoor autonomous vehicle navigation by ultrasonic signal analysis and fuzzy control techniques," Proceedings of 2009 Workshop on Image Processing, Computer Graphics, and Multimedia Technologies, National Computer Symposium, pp. 473-482, Taipei, Taiwan, Republic of China.

[20] Y. T. Wang and W. H. Tsai, “Indoor security patrolling with intruding person detection and following capabilities by vision-based autonomous vehicle navigation,” Proceedings of 2006 International Computer Symposium (ICS 2006) – International Workshop on Image Processing, Computer Graphics, and Multimedia Technologies, Taipei, Taiwan, Republic of China, December 2006.

[21] K. L. Chiang and W. H. Tsai, “Security Patrolling and Danger Condition Monitoring in Indoor Environments by Vision-based Autonomous Vehicle Navigation,” M. S. Thesis, Department of Computer and Information Science, National Chiao Tung University, Hsinchu, Taiwan, Republic of China, June 2005.

[22] S. W. Jeng and W. H. Tsai, "Using pano-mapping tables for unwarping of omni-images into panoramic and perspective-view images," IET Image Processing, Vol. 1, No. 2, pp. 149-155, June 2007.

[23] J. Gluckman, S. K. Nayar and K. J. Thoresz, “Real-Time Omnidirectional and Panoramic Stereo,” Proceedings of the Image Understanding Workshop, Vol. 1, pp. 299-303, 1998.

[24] The MathWorks.

http://www.mathworks.com/access/helpdesk/help/toolbox/images/f8-20792.html

[25] The Dimensions of Colour by David Briggs.

http://www.huevaluechroma.com/093.php

[26] M. C. Chen and W. H. Tsai, “Vision-based security patrolling in indoor environments using autonomous vehicles,” M. S. Thesis, Department of Computer and Information Science, National Chiao Tung University, Hsinchu, Taiwan, Republic of China, June 2005.