
Chapter 5: Conclusion and Future Work

5.2 Future Work

National Chengchi University

performed worse. The experimental results show that some of the feature-point matching failures were caused by strong illumination.
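Such failures can be flagged cheaply by counting how many matches survive Lowe's ratio test, the standard filter for SIFT/ORB correspondences. The sketch below is a minimal pure-Python illustration (no OpenCV; the descriptor distances are made-up values, not data from our experiments): when strong illumination washes out the descriptors, best and second-best distances cluster together and few matches pass.

```python
def ratio_test(matches, ratio=0.8):
    """Lowe's ratio test: keep a match only when the best candidate is
    clearly closer than the second-best candidate.

    `matches` is a list of (best_distance, second_best_distance) pairs,
    one per query keypoint; returns the indices of the matches kept."""
    kept = []
    for i, (d1, d2) in enumerate(matches):
        if d2 > 0 and d1 / d2 < ratio:
            kept.append(i)
    return kept

# Hypothetical distances: two distinctive matches, two ambiguous ones.
candidates = [(0.2, 0.9), (0.55, 0.6), (0.1, 0.8), (0.5, 0.52)]
good = ratio_test(candidates)  # only indices 0 and 2 survive
```

A low survivor count relative to the number of detected keypoints can then trigger the error-recovery path instead of trusting an unreliable pose estimate.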

3. Precision-landing experiments and applications: when AprilTag recognition fails, the system can quickly and effectively execute error-recovery actions, but there is still room for improvement. The current experiments were conducted only at a single landmark building, a site with a roughly L-shaped exterior that the UAV gradually cuts into and approaches. Future work should apply the visual-navigation method proposed in this study to other outdoor scenes, such as terrain structures with different shapes and appearances. Taking cargo delivery as an example, an experimental mission could be designed in which the UAV takes off from one location carrying cargo, flies to the vicinity of the designated target landing building, performs this study's visual-navigation stages in sequence, delivers the cargo to the designated marker point, and completes the landing.
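The detect-descend-recover behavior described above can be outlined as a small state machine. This is only an illustrative sketch under assumed state names and a hypothetical lost-frame threshold, not the controller used in our experiments:

```python
from enum import Enum, auto

class LandingState(Enum):
    SEARCH_TAG = auto()  # look for the AprilTag in the current frame
    DESCEND = auto()     # tag visible: descend toward it
    RECOVER = auto()     # tag lost too long: back off and re-search
    LANDED = auto()

def step(state, tag_visible, lost_frames, max_lost=5):
    """One transition of the illustrative landing controller.

    Returns (next_state, lost_frames). Tolerating a few missed
    detections before recovering avoids reacting to single-frame
    dropouts; `max_lost` is an assumed threshold."""
    if state is LandingState.DESCEND:
        if tag_visible:
            return LandingState.DESCEND, 0
        lost_frames += 1
        if lost_frames >= max_lost:
            return LandingState.RECOVER, lost_frames
        return LandingState.DESCEND, lost_frames
    if state is LandingState.RECOVER:
        # e.g. gain altitude to widen the field of view, then re-search
        if tag_visible:
            return LandingState.DESCEND, 0
        return LandingState.SEARCH_TAG, 0
    if state is LandingState.SEARCH_TAG:
        if tag_visible:
            return LandingState.DESCEND, 0
        return LandingState.SEARCH_TAG, 0
    return state, lost_frames
```

The same skeleton extends naturally to the cargo-delivery mission: prepend a cruise-to-building state before SEARCH_TAG and append a release-cargo step after touchdown.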

