
Conclusions and Future Research Directions

Feature points are detected and their descriptors generated with SURF (Speeded-Up Robust Features), and RANSAC is then applied to reject outliers, keeping only the required feature points and improving their stability. From the motion between the first two images of the initial sequence, the rotation matrix R and the translation vector t are obtained by decomposing the essential matrix, and these are in turn used to recover the 3D coordinates of the features. Once both the 3D and 2D coordinates of the feature points are available, the EPnP algorithm is applied to estimate the camera pose. A monocular system, however, cannot determine absolute movement distance and units; a future research direction is to introduce a reference of known size (a scale bar) before computation, so that the system can truly recover absolute scale. For motion estimation, an inertial measurement unit (IMU) could also be incorporated to provide attitude changes, assisting the motion estimation and improving its accuracy.
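The geometric core of the pipeline above — decomposing the essential matrix into R and t, then recovering 3D feature coordinates from two views — can be sketched in a few lines of NumPy. This is a minimal, noise-free illustration under idealized assumptions (unit-norm translation, normalized image coordinates), not the thesis's implementation; the function names are hypothetical.

```python
import numpy as np

def skew(t):
    """Cross-product matrix: skew(t) @ x == np.cross(t, x)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def decompose_essential(E):
    """Return the four (R, t) candidate poses of an essential matrix.

    The physically valid candidate is normally selected afterwards by the
    cheirality test (triangulated points must lie in front of both
    cameras); that test is omitted here. t is recovered only up to scale,
    which is exactly the monocular scale ambiguity discussed above.
    """
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:       # force proper rotations (det = +1)
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0]])
    t = U[:, 2]                    # translation direction (up to sign)
    return [(U @ W @ Vt, t), (U @ W @ Vt, -t),
            (U @ W.T @ Vt, t), (U @ W.T @ Vt, -t)]

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one feature from two views.

    P1, P2 are 3x4 projection matrices; x1, x2 are the matched 2D points
    in normalized image coordinates.
    """
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]            # dehomogenize
```

With ground-truth R and unit t, E = skew(t) @ R, and the true pose appears among the four candidates; after choosing it, P1 = [I | 0] and P2 = [R | t] triangulate each matched feature pair.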

In recent years, panoramic cameras such as the Ricoh Theta S (Figure 5-1) have become widespread. They use circular fisheye lenses with a 180-degree field of view (FOV), capturing an extremely wide image area and therefore far more feature information. In future work, such lenses could be applied to the monocular visual ego-motion estimation proposed in this thesis, and even to 3D reconstruction, to further improve the results.
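For reference, a common model for such circular fisheye lenses is the equidistant projection r = f·θ, where θ is the angle between the incoming ray and the optical axis; a 180-degree FOV then maps θ ∈ [0, π/2] onto a circular image. The sketch below assumes this model as an illustration only — the Theta S's actual lens model is not specified here:

```python
import numpy as np

def equidistant_project(X, f):
    """Project a 3D point with the equidistant fisheye model r = f * theta.

    theta is the angle between the viewing ray and the optical (z) axis,
    phi its azimuth; a 180-degree FOV covers theta up to pi / 2.
    """
    x, y, z = X
    theta = np.arctan2(np.hypot(x, y), z)   # angle off the optical axis
    phi = np.arctan2(y, x)                  # azimuth around the axis
    r = f * theta                           # radial distance in the image
    return np.array([r * np.cos(phi), r * np.sin(phi)])
```

A ray along the optical axis lands at the image center, while a ray 90 degrees off-axis lands on the image rim at radius f·π/2 — which is why a single such lens can cover a full hemisphere of features.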

In addition, this research has already recovered the camera motion trajectory. In the future, it could be combined with dense point clouds to generate meshes textured with the color images captured by the camera, which is expected to yield large-scale 3D reconstruction models with color texture. Such models could be widely applied to large-scale environment data acquisition, disaster prevention and relief, 3D reconstruction of historic monuments, ecological, geological, and geographical research, and urban planning and land development.

Figure 5-1 The Ricoh Theta S panoramic camera

