
Chapter 5 Conclusions

5.2 Future Research Directions

The research in this thesis focuses on the design of a vision-based navigation system for high-speed motion. Future development of this work falls into two parts:

I. Design of the vision navigation system

1. Develop a more accurate feature-point recognition algorithm. This thesis currently takes the edges of the road surface and the walls as its feature lines; in the future we hope to go further and recognize obstacles and moving objects on the road, making the autonomous vehicle's vision navigation system more complete.

2. Reduce the amount of image-processing computation. If a detected feature line appears on the right side of the frame, only the right half of the frame needs to be processed, which would greatly increase the processing speed (a minimal sketch of this idea follows below).
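As a rough illustration of the half-frame idea in item 2 above, the following sketch restricts the edge scan to whichever half of the frame the feature line last appeared in. It is a minimal sketch in plain C, assuming a hypothetical 320x240 grayscale frame and a simple horizontal-gradient test standing in for the thesis's actual feature-line detector.

```c
#include <stdint.h>
#include <stdlib.h>

#define FRAME_W 320   /* hypothetical capture resolution; the actual */
#define FRAME_H 240   /* camera resolution in the thesis may differ  */

typedef enum { SIDE_LEFT, SIDE_RIGHT } Side;

/* Scan only the half of the grayscale frame where the feature line was
 * last seen, marking pixels whose horizontal gradient exceeds the given
 * threshold. Returns the number of edge pixels found. A simple
 * central-difference gradient stands in for the real edge detector. */
static int scan_half_frame(const uint8_t frame[FRAME_H][FRAME_W],
                           uint8_t edges[FRAME_H][FRAME_W],
                           Side last_side, int threshold)
{
    int x0 = (last_side == SIDE_RIGHT) ? FRAME_W / 2 : 1;
    int x1 = (last_side == SIDE_RIGHT) ? FRAME_W - 1 : FRAME_W / 2;
    int count = 0;

    for (int y = 0; y < FRAME_H; ++y) {
        for (int x = x0; x < x1; ++x) {
            int gx = (int)frame[y][x + 1] - (int)frame[y][x - 1];
            edges[y][x] = (uint8_t)(abs(gx) > threshold);
            count += edges[y][x];
        }
    }
    return count;
}
```

If the scan returns no edge pixels, the caller would fall back to a full-frame scan, so the halved workload never causes the feature line to be lost outright.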

II. Design of the real vehicle system

1. For image capture, we hope to switch to a higher-speed or higher-resolution camera, so that the captured images will be clearer and contain less noise.

2. Since the indoor environment is quite complex, adding other range-finding sensors to assist the camera would improve the robustness of the vehicle itself (a simple fusion sketch follows below).
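One way to realize the sensor-assisted robustness in item 2 above is to fuse the camera's distance estimate with an ultrasonic range reading. The sketch below is a minimal complementary-weighting example, not the method of this thesis; the 0.7/0.3 split and the 50 cm cutoff are illustrative assumptions.

```c
/* Fuse a vision-based distance estimate with an ultrasonic range
 * reading (both in centimetres). The weight favours the ultrasonic
 * sensor at short range, where a single-camera estimate tends to be
 * least reliable; both constants are illustrative assumptions. */
static double fuse_distance(double vision_cm, double ultrasonic_cm)
{
    /* trust the sonar more when the obstacle is close */
    double alpha = (ultrasonic_cm < 50.0) ? 0.7 : 0.3;
    return alpha * ultrasonic_cm + (1.0 - alpha) * vision_cm;
}
```

A filter of this kind could be extended with per-sensor validity checks, for example discarding ultrasonic readings outside the sensor's rated range before fusing.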

