
Chapter 7 Experimental Results and Discussions

7.2 Discussions

By analyzing the experimental results of the learning procedure and the navigation procedure, we identify several problems. First, in guide line detection, we use the sidewalk color to extract the guide line. If the sidewalk contains too many colors, the system gets confused and cannot decide which features to use for guiding the vehicle; in this case, all the colors have to be learned as features. Second, the adjustment of the vehicle speed should be quicker and smoother, which can be achieved by using a faster CPU. Furthermore, because of the use of the plastic camera enclosure, light reflection from the enclosure creates undesired lighting spots in the omni-image, as illustrated by Figure 7.8. This hopefully may be improved in future designs of the camera system. Finally, more experiments may be conducted to test the system in different environments.
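One way to handle multi-colored sidewalks, as mentioned above, is to learn a small set of sidewalk colors instead of a single one. The following Python sketch illustrates this idea under our own simplifications; the color values, threshold, and names are hypothetical and not taken from the system described in this thesis.

    import numpy as np

    # Hypothetical learned sidewalk colors (RGB) and matching threshold.
    learned_colors = np.array([[182, 180, 175],    # e.g. gray pavement tiles
                               [140, 110,  90]])   # e.g. brown brick strips
    THRESHOLD = 30.0    # maximum RGB distance to any learned color

    def sidewalk_mask(image):
        """Label a pixel as sidewalk if it is close to any learned color."""
        # image: H x W x 3 array; compare each pixel to every learned color.
        diffs = image[:, :, None, :].astype(float) - learned_colors[None, None, :, :]
        dists = np.linalg.norm(diffs, axis=3)      # H x W x K distances
        return dists.min(axis=2) < THRESHOLD       # boolean sidewalk mask

The guide line can then be extracted from the boundary of this mask, just as with a single learned color.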

Figure 7.6 Sonar signals obtained when a person stands behind the vehicle.


Figure 7.7 An experimental result of an obstacle avoidance process. (a) An obstacle in front of the vehicle. (b) A side view of (a). (c) The vehicle avoiding the obstacle. (d) A side view of (c). (e) Obstacle detection result using a captured image.


Figure 7.8 Light pollution in the omni-image.

Chapter 8

Conclusions and Suggestions for Future Works

8.1 Conclusions

In this study, we have designed an autonomous vehicle system which is equipped with a newly-designed two-mirror omni-camera as the visual sensor and navigates on sidewalks for use as a guide dog. Several techniques for implementing such a system have been proposed.

First, we have derived a new formula for designing the two-mirror omni-camera, which is composed of two reflective mirrors with hyperboloidal shapes and a traditional projective camera. The formula describes the relationship between the parameters of the mirror surfaces and the positions of the mirrors with respect to the camera, so that two-mirror omni-cameras with different parameters can be produced easily from this general formula.
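For concreteness, the following block restates the standard hyperboloidal mirror profile used in the catadioptric camera literature (this is the common convention, as in Baker and Nayar [9], not necessarily the exact form of the formula derived in this thesis):

    % Hyperboloidal mirror surface, rotationally symmetric about the
    % optical (Z) axis, with semi-axes a and b:
    \[
      \frac{(Z - c)^2}{a^2} - \frac{X^2 + Y^2}{b^2} = 1,
      \qquad c = \sqrt{a^2 + b^2}.
    \]
    % The two foci lie at Z = 0 and Z = 2c. Placing the camera's lens center
    % at one focus makes every ray aimed at the other focus reflect into the
    % lens; a design formula must preserve this property for both mirrors.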

Next, we have proposed several new techniques for calibrating the camera and the mechanical error of the autonomous vehicle. The camera calibration technique is based on the pano-mapping technique proposed by Jeng and Tsai [22]: a mapping table describing the relationship between image pixels and the elevation angles with respect to the hyperboloidal mirror has been created and used for object localization and 3D data computation. To calibrate the mechanical error of the odometer equipped in the vehicle, a calibration model based on the curve fitting technique has been proposed, and the mechanical error is reduced by its use.
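To make the use of the mapping table and the curve fitting model concrete, the following Python sketch shows one plausible form of each; the array names, the flat-ground assumption, and the sample numbers are our own illustrations, not values from the thesis.

    import numpy as np

    # Hypothetical pano-mapping tables from calibration: for each pixel they
    # store the elevation and azimuth angles of the corresponding light ray
    # with respect to the mirror's focal point.
    H, W = 480, 640
    elevation = np.zeros((H, W))   # angle below the horizontal (radians)
    azimuth = np.zeros((H, W))     # angle around the vertical axis (radians)

    def locate_on_ground(u, v, focus_height):
        """Localize the ground point seen at pixel (u, v), assuming it lies
        on flat ground at focus_height below the mirror focus."""
        alpha, theta = elevation[v, u], azimuth[v, u]
        r = focus_height / np.tan(alpha)               # horizontal range
        return r * np.cos(theta), r * np.sin(theta)    # (x, y) w.r.t. vehicle

    # Odometer calibration by curve fitting: fit a polynomial that maps raw
    # odometer readings to hand-measured distances (illustrative sample data).
    readings = np.array([1.00, 2.00, 3.00, 4.00])      # odometer output (m)
    measured = np.array([1.02, 2.05, 3.09, 4.11])      # true distances (m)
    correction = np.polyfit(readings, measured, deg=2) # correction curve
    corrected = np.polyval(correction, 2.50)           # corrected estimate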

Furthermore, we have proposed new techniques for vehicle guidance in the learning procedure and in the navigation procedure. To learn environment information, a semi-automatic method based on the line following technique has been proposed: the vehicle navigates on sidewalks by using the features of the curbstone and the technique of line following. If no special feature exists, or if the features are not easy to extract, a new human interaction technique based on human hand pose detection has been proposed to solve the problem: different pre-defined hand poses placed on the camera enclosure are decoded to issue commands that guide the vehicle when necessary.
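A minimal sketch of the line-following control follows; the proportional rule and gain are our own illustrative choices, not the exact control law used in the thesis.

    import numpy as np

    def steering_from_guide_line(mask, gain=0.005):
        """Proportional steering from a binary guide-line mask (H x W).
        Returns a turn command: negative steers left, positive steers right."""
        _, xs = np.nonzero(mask)        # columns of pixels on the guide line
        if xs.size == 0:
            return None                 # line lost; fall back to other cues
        offset = xs.mean() - mask.shape[1] / 2.0   # lateral offset (pixels)
        return gain * offset            # steer back toward the line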

To adapt to varying light intensities in outdoor environments, two techniques, called dynamic exposure adjustment and dynamic thresholding adjustment, have been proposed. Also, a path planning technique has been proposed which identifies path nodes at critical spots on a learned navigation route to create a navigation map of the environment.
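The following sketch illustrates the two adaptation ideas; the target gray level, step size, and the mean-plus-k-sigma threshold rule are hypothetical stand-ins for the thesis's actual parameters.

    import numpy as np

    TARGET_MEAN = 128.0   # desired average gray level of the captured image
    STEP = 0.05           # relative exposure change per adjustment

    def adjust_exposure(gray, exposure):
        """Nudge the exposure so the mean image intensity drifts to the target."""
        mean = float(gray.mean())
        if mean < TARGET_MEAN * 0.9:      # image too dark: lengthen exposure
            exposure *= 1.0 + STEP
        elif mean > TARGET_MEAN * 1.1:    # image too bright: shorten exposure
            exposure *= 1.0 - STEP
        return exposure

    def dynamic_threshold(gray, k=1.0):
        """A lighting-dependent segmentation threshold (one simple choice)."""
        return float(gray.mean()) + k * float(gray.std())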

To navigate in the environment with the path node map, a new technique is used to avoid dynamic obstacles on the navigation path without leaving the guide line on the sidewalk. To guide a person in the environment, a sonar signal processing method has been proposed to synchronize the speed of the vehicle with that of the person. Also proposed is a technique for computing the location of the vehicle in the environment.
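A sketch of the speed-synchronization idea follows; the desired gap, speed limit, and gain are hypothetical placeholders rather than the thesis's parameters.

    DESIRED_GAP = 1.0    # desired distance (m) between vehicle and person
    MAX_SPEED = 0.8      # vehicle speed limit (m/s)

    def synchronized_speed(person_distance, current_speed, k=0.5):
        """Slow down when the rear sonar shows the person falling behind,
        and speed up again when the person closes the gap."""
        new_speed = current_speed + k * (DESIRED_GAP - person_distance)
        return max(0.0, min(MAX_SPEED, new_speed))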

Good experimental results show the feasibility of the proposed system.

8.2 Suggestions for Future Works

According to the experience obtained in this study, several suggestions for future works are listed as follows.

1. The idea of the proposed calibration method may be applied to other types of stereo cameras.

2. Mounting the two-mirror omni-camera on the autonomous vehicle in different orientations, such as down-looking, may be tried.

3. Using different features, such as trees, road lights, and special signboards, to guide the autonomous vehicle in outdoor environments may be attempted.

4. Developing different human interface techniques using other features, such as human bodies, human motions, and human faces, may be attempted.

5. Designing new algorithms to compute 3D range data of objects more quickly is worth studying.

References

[1] J. Borenstein and I. Ulrich, “The GuideCane - A Computerized Travel Aid for the Active Guidance of Blind Pedestrians,” Proceedings of the IEEE International Conference on Robotics and Automation, Albuquerque, NM, Apr. 21-27, 1997, pp. 1283-1288.

[2] C. C. Sun and M. C. Su, “A Low-Cost Travel-Aid for the Blind,” M. S. Thesis, Department of Computer Science and Information Engineering, National Central University, Jhongli, Taoyuan, Taiwan, June 2005.

[3] S. Tachi and K. Komoriya, “Guide dog robot,” Proceedings of the 2nd International Congress on Robotics Research, Kyoto, Japan, 1984, pp. 333-340.

[4] The Robot World.

http://www.robotworld.org.tw/

[5] National Yunlin University of Science and Technology.

http://www.swcenter.yuntech.edu.tw/

[6] J. Kannala and S. Brandt, “A Generic Camera Calibration Method for Fish-Eye Lenses,” Proceedings of the 17th International Conference on Pattern Recognition, Cambridge, U.K., August 2004, Vol. 1, pp. 10-13.

[7] C. J. Wu, “New Localization and Image Adjustment Techniques Using Omni-Cameras for Autonomous Vehicle Applications,” Ph.D. Dissertation, Institute of Computer Science and Engineering, National Chiao Tung University, Hsinchu, Taiwan, Republic of China, July 2009.

[8] S. K. Nayar, “Catadioptric Omnidirectional Camera,” Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, San Juan, Puerto Rico, June 1997, pp. 482-488.

[9] S. Baker and S. K. Nayar, “A Theory of Single-Viewpoint Catadioptric Image Formation,” International Journal of Computer Vision, Vol. 35, No. 2, pp. 175-196, November 1999.

[10] H. Ukida, N. Yamato, Y. Tanimoto, T. Sano and H. Yamamoto, “Omni-directional 3D Measurement by Hyperbolic Mirror Cameras and Pattern Projection,” Proceedings of the IEEE International Instrumentation and Measurement Technology Conference, Victoria, Vancouver Island, Canada, May 12-15, 2008.

[11] Z. Zhu, “Omnidirectional Stereo Vision,” Proceedings of the 10th IEEE International Conference on Advanced Robotics (ICAR 2001), Budapest, Hungary, August 22-25, 2001.

[12] L. He, C. Luo, F. Zhu, Y. Hao, J. Ou and J. Zhou, “Depth Map Regeneration via Improved Graph Cuts Using a Novel Omnidirectional Stereo Sensor,” Proceedings of the 11th IEEE International Conference on Computer Vision (ICCV 2007), Rio de Janeiro, Oct. 14-21, 2007, pp. 1-8.

[13] S. Yi and N. Ahuja, “An Omnidirectional Stereo Vision System Using a Single Camera,” Proceedings of the 18th International Conference on Pattern Recognition (ICPR 2006), Hong Kong, Aug. 20-24, 2006.

[14] G. Jang, S. Kim and I. Kweon, “Single Camera Catadioptric Stereo System,” Proceedings of the Workshop on Omnidirectional Vision, Camera Networks and Non-classical Cameras (OMNIVIS 2005), 2005.

[15] K. C. Chen and W. H. Tsai, “A study on autonomous vehicle navigation by 3D object image matching and 3D computer vision analysis for indoor security patrolling applications,” Proceedings of 2007 Conference on Computer Vision, Graphics and Image Processing, Miaoli, Taiwan, June 2007.

[16] J. Y. Wang and W. H. Tsai, “A Study on Indoor Security Surveillance by Vision-based Autonomous Vehicles with Omni-cameras on House Ceilings,” M. S. Thesis, Institute of Computer Science and Engineering, National Chiao Tung University, Hsinchu, Taiwan, Republic of China.

[17] S. Y. Tsai and W. H. Tsai, “Simple automatic path learning for autonomous vehicle navigation by ultrasonic sensing and computer vision techniques,” Proceedings of 2008 International Computer Symposium, Vol. 2, pp. 207-212, Taipei, Taiwan, Republic of China.

[18] K. T. Chen and W. H. Tsai, “A study on autonomous vehicle guidance for person following by 2D human image analysis and 3D computer vision techniques,” Proceedings of 2007 Conference on Computer Vision, Graphics and Image Processing, Miaoli, Taiwan, Republic of China.

[19] M. F. Chen and W. H. Tsai, “Automatic learning and guidance for indoor autonomous vehicle navigation by ultrasonic signal analysis and fuzzy control techniques,” Proceedings of 2009 Workshop on Image Processing, Computer Graphics, and Multimedia Technologies, National Computer Symposium, pp. 473-482, Taipei, Taiwan, Republic of China.

[20] Y. T. Wang and W. H. Tsai, “Indoor security patrolling with intruding person detection and following capabilities by vision-based autonomous vehicle navigation,” Proceedings of 2006 International Computer Symposium (ICS 2006) – International Workshop on Image Processing, Computer Graphics, and Multimedia Technologies, Taipei, Taiwan, Republic of China, December 2006.

[21] K. L. Chiang and W. H. Tsai, “Security Patrolling and Danger Condition Monitoring in Indoor Environments by Vision-based Autonomous Vehicle Navigation,” M. S. Thesis, Department of Computer and Information Science, National Chiao Tung University, Hsinchu, Taiwan, Republic of China, June 2005.

[22] S. W. Jeng and W. H. Tsai, “Using pano-mapping tables for unwarping of omni-images into panoramic and perspective-view images,” IET Image Processing, Vol. 1, No. 2, pp. 149-155, June 2007.

[23] J. Gluckman, S. K. Nayar and K. J. Thoresz, “Real-Time Omnidirectional and Panoramic Stereo,” Proceedings of the Image Understanding Workshop, Vol. 1, pp. 299-303, 1998.

[24] The MathWorks.

http://www.mathworks.com/access/helpdesk/help/toolbox/images/f8-20792.html

[25] The Dimensions of Colour by David Briggs.

http://www.huevaluechroma.com/093.php

[26] M. C. Chen and W. H. Tsai, “Vision-based security patrolling in indoor environments using autonomous vehicles,” M. S. Thesis, Department of Computer and Information Science, National Chiao Tung University, Hsinchu, Taiwan, Republic of China, June 2005.