
Chapter 7  Experimental Results and Discussions

7.2  Discussions

By analyzing the experimental results of the vehicle navigation, we found several problems. Firstly, for sidewalk curb detection, we detect only the specific curb with a red surface on the campus of National Chiao Tung University; more kinds of curb lines with different colors should be learned for the line following technique. Also, when dynamically adjusting the exposure to obtain an appropriate value for conducting different landmark detection works, it may take some time for the camera system to reach the appointed exposure value. A possible way to solve this problem is to use another camera with a quicker response time in the camera parameter adjustment process. Furthermore, the light reflection that the plastic camera enclosure creates in the omni-image also causes ill effects in image analysis. A possible solution is to learn these specific regions in advance and ignore them when conducting image processing. Finally, more experiments in different environments should also be conducted to test our system more thoroughly.
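The idea of learning the reflection regions in advance and ignoring them during image processing can be sketched as follows; the boolean-mask representation and the rectangular region format are illustrative assumptions, not the exact data structure used in the system.

```python
import numpy as np

# Illustrative sketch: learned reflection regions are stored as a boolean
# mask the same size as the omni-image; True marks pixels to ignore.
def build_reflection_mask(shape, regions):
    """Mark learned reflection regions (y0, y1, x0, x1) as invalid."""
    mask = np.zeros(shape, dtype=bool)
    for y0, y1, x0, x1 in regions:
        mask[y0:y1, x0:x1] = True
    return mask

def valid_pixels(image, mask):
    """Return only the pixel values outside the learned reflection regions."""
    return image[~mask]
```

In this sketch the mask is built once from the learned regions and applied to every incoming frame, so the per-frame cost is a single boolean indexing operation.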

Chapter 8  

Conclusions and Suggestions for Future Works

8.1 Conclusions

A vision-based autonomous vehicle navigation system for use as a machine guide dog in outdoor environments has been proposed in this study. To implement such a system, several techniques have been proposed.

At first, a method to train the vehicle system for the purpose of learning environment information has been proposed. By the pano-mapping technique proposed by Jeng and Tsai [25], we calibrate the two-mirror omni-camera used in this study by recording the relationship between image pixels and real-world elevation and azimuth angles. Next, by a learning interface designed in this study, a trainer of the vehicle system can guide the vehicle to navigate on a sidewalk and conveniently construct a navigation map, including the path nodes, along-path landmarks, and relevant guidance parameters.
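The pano-mapping calibration described above can be sketched as a lookup table from image pixels to angle pairs; the `project` callback and the table granularity below are hypothetical stand-ins for the actual calibration procedure of Jeng and Tsai [25].

```python
# A minimal sketch of a pano-mapping table: the calibration step records,
# for sampled azimuth/elevation angle pairs, the corresponding omni-image
# pixel coordinates (names and structure here are illustrative).
def build_pano_map(project, n_azimuth=360, n_elevation=90):
    """project(azimuth_deg, elevation_deg) -> (u, v) pixel; filled offline."""
    table = {}
    for a in range(n_azimuth):
        for e in range(n_elevation):
            table[project(a, e)] = (a, e)
    return table

def pixel_to_angles(table, u, v):
    """Look up the azimuth/elevation pair recorded for an image pixel."""
    return table.get((u, v))
```

Once the table is filled offline, mapping an image pixel back to its real-world elevation and azimuth angles at run time is a constant-time lookup.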

Next, a new space line detection technique based on the pano-mapping technique has been proposed. A space line, whose projection on the omni-image is a curve, can be detected by the use of analytic formulas and the Hough transform technique. In addition, for a vertical space line, which exists in landmarks like light poles and hydrants, we can further compute its position directly according to the omni-imaging and pano-mapping techniques.
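For the vertical-line case, the detection idea can be illustrated by a one-dimensional Hough-style vote: a vertical space line projects to omni-image points that share a single azimuth angle (obtainable through the pano-mapping table), so peaks in an azimuth histogram indicate vertical-line candidates. The bin width and vote threshold below are illustrative assumptions, not the values used in the system.

```python
from collections import Counter

# Sketch of a 1-D Hough-style vote over azimuth angles of edge points.
# Edge points belonging to a vertical space line fall into one azimuth
# bin, so bins with many votes are vertical-line candidates.
def detect_vertical_lines(edge_azimuths_deg, bin_width=2, min_votes=30):
    """Vote edge-point azimuths into bins; return peak azimuths in degrees."""
    votes = Counter(int(a // bin_width) for a in edge_azimuths_deg)
    return [b * bin_width for b, n in votes.items() if n >= min_votes]
```
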

Also, several landmark detection techniques have been proposed for conducting vehicle navigation. Firstly, a curb line detection technique has been proposed to guide the vehicle on a safe path as well as to calibrate the odometer reading of the vehicle orientation. Next, hydrant and light pole detection techniques have been proposed; the vertical space lines found in these landmarks can be used to localize the vehicle in the navigation process. Furthermore, to conduct the landmark detection works more effectively in outdoor environments, techniques for dynamic exposure and threshold adjustments have also been proposed, which can be employed to adjust the system's parameters to meet different lighting conditions. A new obstacle detection technique has also been proposed, which can be used to find dynamic obstacles on the sidewalk for safer vehicle navigation. Specifically, by the use of a ground matching table, the vehicle can detect obstacles on the path and localize its position for real-time path planning to conduct an obstacle avoidance process automatically.
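The ground matching idea can be illustrated with a small sketch in which each ground cell stores a learned intensity range, and cells falling outside their range at run time are flagged as obstacle cells; the dictionary layout and the default bounds are assumptions for illustration only.

```python
# Illustrative sketch of obstacle detection with a "ground matching table":
# during learning, each sampled ground cell stores an expected intensity
# range; at run time, cells whose observed value falls outside the learned
# range are reported as obstacle cells.
def detect_obstacle_cells(frame, ground_table):
    """frame maps cell -> observed value; ground_table maps cell -> (lo, hi)."""
    obstacles = []
    for cell, value in frame.items():
        lo, hi = ground_table.get(cell, (0, 255))
        if not (lo <= value <= hi):
            obstacles.append(cell)
    return obstacles
```

The flagged cells can then feed the real-time path planner mentioned above to steer around the obstacle.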

Good landmark detection results and successful navigation sessions on a sidewalk in a university campus show the feasibility of the proposed methods.

8.2 Suggestions for Future Works

According to our experience obtained in this study, in the following we point out some related interesting issues worth further investigation in the future:

(1) the proposed space line detection technique may be adopted to detect and localize other kinds of landmarks with vertical line features;

(2) it is interesting to use different artificial or natural landmarks, such as a tree, a signboard, a pillar, or a building, to conduct vehicle navigation in outdoor environments;

(3) the curb line detection technique may be improved by learning features of other kinds of curb lines;

(4) it is a challenge to develop additional techniques to guide the vehicle to pass crossroads, like recognizing traffic signals and following zebra crossings, etc.;

(5) it seems necessary to add the capability of warning the user in dangerous conditions;

(6) the dynamic obstacle detection technique may be improved by using other techniques such as template matching;

(7) it is desirable to design a new camera system of smaller size.

References

[1] S. Shoval, J. Borenstein, and Y. Koren, “The NavBelt - A computerized travel aid for the blind,” Proceedings of the RESNA '93 Conference, pp. 240-242, Las Vegas, Nevada, USA, June 13-18, 1993.

[2] J. Borenstein and I. Ulrich, “The GuideCane - A computerized travel aid for the active guidance of blind pedestrians,” Proceedings of the IEEE International Conference on Robotics and Automation, pp. 1283-1288, Albuquerque, NM, USA, Apr. 1997.

[3] H. Mori and M. Sano, "A guide dog robot Harunobu-5 following a person," IEEE/RSJ International Workshop on Intelligent Robots and Systems, vol. 1, pp. 397-402, Osaka, Japan, 1991.

[4] Y. Z. Hsieh and M. C. Su, “A stereo-vision-based aid system for the blind,” M. S. Thesis, Department of Computer Science and Information Engineering, National Central University, Jhongli, Taoyuan, Taiwan, June 2006.

[5] L. Ran, S. Helal, and S. Moore, “Drishti: An integrated indoor/outdoor blind navigation system and service,” Proceedings of the 2nd IEEE Annual Conference on Pervasive Computing and Communications, pp. 23-31, Orlando, Florida, USA, 2004.

[6] M. Kam, X. Zhu, and P. Kalata, “Sensor fusion for mobile robot navigation,” Proceedings of the IEEE, vol. 85, no. 1, pp. 108-119, 1997.

[7] M. F. Chen and W. H. Tsai, "Automatic learning and guidance for indoor autonomous vehicle navigation by ultrasonic signal analysis and fuzzy control techniques," Proceedings of 2009 Workshop on Image Processing, Computer

[8] E. Abbot and D. Powell, "Land-vehicle navigation using GPS", Proceedings of the IEEE, vol. 87, no. 1, pp. 145-162, Jan. 1999.

[9] M. C. Chen and W. S. Tsai, “Vision-based security patrolling in indoor environments using autonomous vehicles,” M. S. Thesis, Department of Computer and Information Science, National Chiao Tung University, Hsinchu, Taiwan, June 2005.

[10] K. L. Chiang and W. H. Tsai, “Vision-based autonomous vehicle guidance in indoor environments using odometer and house corner location information,” Proceedings of the 2006 IEEE International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIHMSP-2006), pp. 415-418, Pasadena, California, USA, Dec. 18-20, 2006.

[11] S. Y. Tsai and W. H. Tsai, "Simple automatic path learning for autonomous vehicle navigation by ultrasonic sensing and computer vision techniques," Proceedings of the 2008 International Computer Symposium, vol. 2, pp. 207-212, Taipei, Taiwan, Dec. 2008.

[12] S. Pagnottelli, S. Taraglio, P. Valigi, and A. Zanela, “Visual and laser sensory data fusion for outdoor robot localisation and navigation,” Proceedings of 12th International Conference on Advanced Robotics, pp. 171-177, Seattle, Washington, USA, July 2005.

[13] Taiwan Foundation for the Blind: http://www.tfb.org.tw/english/index.html.

[14] Taiwan Guide Dog Association: http://www.guidedog.org.tw/.

[15] S. E. Yu and D. Kim, “Distance estimation method with snapshot landmark

[16] T. Tasaki, S. Tokura, T. Sonoura, F. Ozaki, and N. Matsuhira, “Mobile robot self-localization based on tracked scale and rotation invariant feature points by using an omnidirectional camera,” Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5202-5207, Taipei, Taiwan, Oct. 18-22, 2010.

[17] C. J. Wu and W. H. Tsai, “Location estimation for indoor autonomous vehicle navigation by omni-directional vision using circular landmarks on ceilings,” Robotics and Autonomous Systems, vol. 57, no. 5, pp. 546-555, May 2009.

[18] B. Siemiątkowska and R. Chojecki, “Mobile robot localization based on omnicamera,” Proceedings of the 5th IFAC/EURON Symposium on Intelligent Autonomous Vehicles (IAV 2004), Lisbon, Portugal, July 5-7, 2004.

[19] J. Courbon, Y. Mezouar, and P. Martinet, “Autonomous navigation of vehicles from a visual memory using a generic camera model,” IEEE Transactions on Intelligent Transportation Systems, vol. 10, no. 3, pp. 392-402, Sept. 2009.

[20] A. Merke, S. Welker, and M. Riedmiller, “Line based robot localization under natural light conditions,” Proceedings of ECAI Workshop on Agents in Dynamic and Real Time Environments, pp. 402-409, Valencia, Spain, 2004.

[21] S. Kumar, “Binocular stereo vision based obstacle avoidance algorithm for autonomous Mobile Robots,” Proceedings of IEEE International Advance Computing Conference, pp. 254-259, Patiala, India, Mar. 2009.

[22] D. Fernandez and A. Price, “Visual detection and tracking of poorly structured dirt roads,” Proceedings of the 12th International Conference on Advanced Robotics, pp. 553-560, Seattle, Washington, USA, July 2005.

[23] D. Kim, J. Sun, S. M. Oh, J. M. Rehg, and A. F. Bobick, “Traversability classification using unsupervised on-line visual learning for outdoor robot navigation,” Proceedings of the 2006 IEEE International Conference on Robotics and Automation, pp. 518–525, May 2006.

[24] Q. Mühlbauer, S. Sosnowski, T. Xu, T. Zhang, K. Kühnlenz, and M. Buss, “Navigation through urban environments by visual perception and interaction,” Proceedings of the IEEE International Conference on Robotics and Automation, pp. 1907-1913, Kobe, Japan, 2009.

[25] S. W. Jeng and W. H. Tsai, "Using pano-mapping tables for unwarping of omni-images into panoramic and perspective-view images," IET Image Processing, vol. 1, no. 2, pp. 149-155, June 2007.

[26] C. J. Wu and W. H. Tsai, "An omni-vision based localization method for automatic helicopter landing assistance on standard helipads," Proceedings of 2nd International Conference on Computer and Automation Engineering, vol. 3, pp. 327–332, Singapore, 2010.

[27] J. K. Huang and W. H. Tsai, “Autonomous vehicle navigation by two-mirror omni-directional imaging and ultrasonic sensing techniques,” M. S. Thesis, Department of Computer and Information Science, National Chiao Tung University, Hsinchu, Taiwan, June 2010.

[28] K.C. Chen and W. H. Tsai, “A study on autonomous vehicle navigation by 2D object image matching and 3D computer vision analysis for indoor security patrolling applications,” Proceedings of 2007 Conference on Computer Vision, Graphics and Image Processing, Miaoli, Taiwan, Aug. 2007.