
Author: Liu, Shen-Ho (劉申禾)
Title: Visual Odometry Based on GPU Computation (基於GPU平行計算之視覺里程計)
Advisors: Hsu, Chen-Chien (許陳鑑); Wang, Wei-Yen (王偉彥)
Degree: Master
Department: Department of Electrical Engineering
Year of Publication: 2018
Academic Year of Graduation: 106 (ROC calendar)
Language: Chinese
Pages: 97
Keywords (Chinese): 視覺里程計, SURF, Perspective-Three-Points (P3P), GPU, TX2
Keywords (English): Visual Odometry, SURF, Perspective-Three-Points (P3P), GPU, TX2
DOI URL: http://doi.org/10.6345/THE.NTNU.DEE.006.2018.E08
Type: Academic thesis
Usage statistics: 98 views; 9 downloads
  • This thesis improves a visual odometry (VO) system by adding landmark management, a key-frame selection mechanism, and a camera pose correction model to raise localization accuracy, and exploits GPU parallel computation to realize a computationally efficient system, so that a moving robot can estimate its state relative to its initial position in real time. The improved VO extracts feature points with the SURF algorithm and matches the current features against the landmarks by exhaustive search. A key-frame selection mechanism avoids redundant computation; a landmark management mechanism filters out unstable landmarks and adds new ones; and a mechanism for validating the camera pose resolves the two-solution ambiguity of the P3P algorithm, after which the camera position is estimated with the P3P and RANSAC algorithms. To address accumulated error, a camera pose correction model is added to further improve localization accuracy. To achieve real-time operation, the thesis exploits GPU parallel computation to run SURF together with exhaustive matching for locating features, designs parallel structures suited to P3P and RANSAC, and uses heterogeneous computing (CPU together with GPU) to implement the entire VO system on a TX2 embedded platform, so that overall computational efficiency is greatly increased. Experimental results show that, compared with CPU-only computation, heterogeneous computing improves overall performance by roughly 80 to 90 times, demonstrating that the proposed GPU-based visual odometry provides a low-cost, low-power, portable, high-performance, real-time VO system.
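The exhaustive matching step described above pairs every current-frame descriptor with the landmark descriptor at minimum distance. A minimal CPU sketch in Python (the thesis implements this step on the GPU; the function name and the `max_dist` threshold here are illustrative assumptions, not the thesis's code):

```python
import math

def match_exhaustive(frame_desc, landmark_desc, max_dist=0.4):
    """Brute-force matching: for every descriptor of the current frame,
    scan all landmark descriptors and keep the one with the smallest
    Euclidean distance, provided that distance is below max_dist."""
    matches = []
    for i, d in enumerate(frame_desc):
        best_j, best_dist = -1, float("inf")
        for j, m in enumerate(landmark_desc):
            dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(d, m)))
            if dist < best_dist:
                best_j, best_dist = j, dist
        if best_dist < max_dist:
            matches.append((i, best_j, best_dist))
    return matches
```

The doubly nested loop is exactly what makes the exhaustive search a good fit for GPU parallelization: each (frame feature, landmark) distance can be computed by an independent thread.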

    In this thesis, an improved visual odometry (VO) system is proposed based on novel map management, key-frame selection, and a camera pose correction model. To enhance the computational efficiency of the VO system, the proposed approach is implemented on a graphics processing unit (GPU). The SURF algorithm is used to detect and describe features in an image, and an exhaustive search algorithm is introduced to match features. To minimize computation time, a key-frame selection mechanism is proposed to distinguish key-frames among the input images. Moreover, map management is proposed to filter out unstable landmarks and add new features for a reliable estimation of the relative camera pose. Additionally, a method to validate the camera pose is proposed to solve the problem of the two possible poses resulting from the perspective-three-point (P3P) algorithm. Estimation accuracy is further improved by a camera pose correction model. To accelerate execution, a GPU implementation of the proposed VO system is developed, taking advantage of parallel computation. The entire VO system is implemented on a TX2 embedded system under a heterogeneous computing architecture to increase the efficiency of the overall system. Several experiments are conducted for validation using an ASUS Xtion 3D camera and a laptop, and the average pose-estimation errors are compared with those of a conventional VO to show the effectiveness of the proposed system. By utilizing the GPU for SURF, exhaustive search, and pose estimation, a real-time VO system is realized that is low in cost and power consumption, high in processing efficiency, and easily portable. Experimental results show that the overall system under heterogeneous computing runs approximately 80 to 90 times faster than a CPU-only implementation.
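The pose-estimation stage combines a minimal solver with RANSAC: repeatedly hypothesize a pose from a minimal sample and keep the hypothesis with the largest consensus set of inliers. The skeleton below is a hedged Python sketch of that loop only; for illustration the P3P solver is replaced by a trivial 2-D translation model between matched points, and all parameter values are assumptions:

```python
import random

def ransac_pose(points_a, points_b, iters=100, thresh=0.1, seed=0):
    """RANSAC skeleton: draw a minimal sample, hypothesize a model,
    score it by its consensus set, and keep the best hypothesis.
    Stand-in model: a pure 2-D translation, so one matched pair is
    a minimal sample (P3P would instead need three 3-D/2-D pairs)."""
    rng = random.Random(seed)
    best_t, best_inliers = None, []
    n = len(points_a)
    for _ in range(iters):
        i = rng.randrange(n)                  # minimal sample
        tx = points_b[i][0] - points_a[i][0]  # hypothesized translation
        ty = points_b[i][1] - points_a[i][1]
        inliers = [
            j for j in range(n)
            if abs(points_b[j][0] - points_a[j][0] - tx) < thresh
            and abs(points_b[j][1] - points_a[j][1] - ty) < thresh
        ]
        if len(inliers) > len(best_inliers):  # largest consensus set wins
            best_t, best_inliers = (tx, ty), inliers
    return best_t, best_inliers
```

Because every hypothesis is scored independently, the iterations map naturally onto GPU thread blocks, which is the structure the thesis exploits for its P3P/RANSAC module.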

    Table of Contents:
    Abstract (Chinese); Abstract (English); Acknowledgments; List of Tables; List of Figures
    Chapter 1  Introduction
      1.1  Background and Motivation
      1.2  Literature Review
      1.3  Thesis Organization
    Chapter 2  Visual Odometry
      2.1  Camera Measurement Model
      2.2  Sensing Model
        2.2.1  Image Distortion Model
        2.2.2  Image Rectification Model
      2.3  SURF Feature Extraction Algorithm
      2.4  Perspective-Three-Point (P3P) Problem
      2.5  Random Sample Consensus (RANSAC) Algorithm
    Chapter 3  Design of the Enhanced Visual Odometry
      3.1  Measurement of Landmark 3-D Coordinates
      3.2  Landmark Management
      3.3  Key-Frame Selection Mechanism
      3.4  Mechanism for Validating the Camera Pose Solution
      3.5  Camera Pose Correction Model
    Chapter 4  GPU Parallel Computing Implementation
      4.1  GPU Architecture
        4.1.1  CUDA Computing Environment
        4.1.2  Thread Structure
      4.2  GPU SURF Module
      4.3  GPU Matching Module
        4.3.1  Computing Descriptor Vector Differences
        4.3.2  Dispatching Blocks and Threads
        4.3.3  Finding the Minimum Descriptor Difference
      4.4  GPU Camera Pose Estimation Module
        4.4.1  P3P and RANSAC Modules
        4.4.2  Consensus Set Module
    Chapter 5  Experimental Results and Discussion
      5.1  Ground-Truth Experiments
        5.1.1  Experimental Environment and Platform
        5.1.2  Comparison of Results
      5.2  Performance Comparison of GPU Modules
        5.2.1  Experimental Platform
        5.2.2  GPU SURF Performance Comparison
        5.2.3  GPU Matching Module Performance Comparison
        5.2.4  Accuracy Analysis of the GPU Pose Estimation Module
        5.2.5  GPU Pose Estimation Module Performance Comparison
      5.3  Overall Performance Analysis
      5.4  CPU versus Heterogeneous Computing Performance Experiments
      5.5  VO on a P3-AT Four-Wheeled Mobile Robot
      5.6  VO on a Tracked Mobile Robot
    Chapter 6  Conclusions and Future Work
      6.1  Conclusions
      6.2  Future Work
    References; Autobiography; Academic Achievements
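Section 3.3's key-frame selection mechanism admits a simple reading: only promote a frame to key-frame status when processing it would actually add information, so that intermediate frames skip the expensive map-update work. A hypothetical sketch under assumed criteria (the thesis does not publish this rule here; the thresholds and the two conditions below are illustrative guesses):

```python
def is_key_frame(num_matched, num_landmarks, translation_norm,
                 min_match_ratio=0.5, min_translation=0.2):
    """Hypothetical key-frame test: promote the current frame when
    too few map landmarks are still matched (the view has changed
    substantially), or when the camera has moved far enough that
    triangulating new landmarks is well conditioned."""
    ratio = num_matched / num_landmarks if num_landmarks else 0.0
    return ratio < min_match_ratio or translation_norm > min_translation
```

Skipping non-key-frames is what keeps the per-frame cost bounded: ordinary frames only run feature matching and pose estimation, while landmark filtering and insertion happen on key-frames alone.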

