
Graduate Student: Ji, Hong-Wen (紀鴻文)
Thesis Title: Autonomous Patrolling Tracked Robot System for Intelligent Security Based on ROS (基於ROS之智慧安防自主巡邏履帶式機器人系統)
Advisor: Wang, Wei-Yen (王偉彥)
Committee Members: Wang, Wei-Yen (王偉彥); Wong, Ching-Chang (翁慶昌); Lu, Ming-Chin (盧明智); Lu, Cheng-Kai (呂成凱); Hsu, Chen-Chien (許陳鑑)
Oral Defense Date: 2022/08/17
Degree: Master
Department: Department of Electrical Engineering
Year of Publication: 2022
Graduation Academic Year: 110
Language: Chinese
Number of Pages: 73
Chinese Keywords: 履帶式機器人、Kinect v2攝影機、障礙物偵測、人體動作辨識、監控系統、模糊理論、模糊類神經網路
English Keywords: tracked robot, Kinect v2 camera, obstacle detection, human movement recognition, monitor system, fuzzy theory, fuzzy neural network
Research Method: Experimental design
DOI URL: http://doi.org/10.6345/NTNU202201586
Thesis Type: Academic thesis
Access Counts: Views: 150, Downloads: 0
Abstract: This thesis integrates a depth sensor with an autonomous tracked robot and proposes two subsystems, one for obstacle detection and one for human movement recognition. In the obstacle detection system, depth images enable the robot to detect obstacles in the space ahead, and a fuzzy controller steers the robot safely around them. In the human movement recognition system, a Kinect v2 camera captures the human skeleton, and a pre-trained fuzzy neural network performs real-time action recognition to determine whether a dangerous movement has occurred. In addition to these two subsystems, a user interface is added to the monitoring system, and three routers in a mesh topology communicate with the tracked robot to exchange video streams, map locations, task requests, and warning-light status.
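The fuzzy obstacle-avoidance idea described above can be illustrated with a minimal Sugeno-style controller sketch in Python. The membership breakpoints (a 1.5 m "near" range, a ±30° left/center/right split) and the singleton turn outputs are illustrative assumptions for this sketch, not the rule base designed in the thesis:

```python
def mu_near(d):
    """Membership of 'obstacle is near', fading out linearly by 1.5 m."""
    return max(0.0, min(1.0, (1.5 - d) / 1.5))

def mu_left(a):      # obstacle angle in degrees, negative = left of center
    return max(0.0, min(1.0, -a / 30.0))

def mu_right(a):
    return max(0.0, min(1.0, a / 30.0))

def mu_center(a):
    return max(0.0, 1.0 - abs(a) / 30.0)

def fuzzy_turn(d, a):
    """Sugeno-style inference: rule firing strengths (min of antecedent
    memberships) weight singleton turn outputs in [-1, 1]; the weighted
    average is the defuzzified turn command (positive = turn right)."""
    rules = [
        (min(mu_near(d), mu_left(a)),   +1.0),  # near, on the left  -> turn right
        (min(mu_near(d), mu_right(a)),  -1.0),  # near, on the right -> turn left
        (min(mu_near(d), mu_center(a)), +1.0),  # near, dead ahead   -> default right
        (1.0 - mu_near(d),               0.0),  # far                -> go straight
    ]
    num = sum(w * u for w, u in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.0
```

For example, an obstacle 0.5 m ahead and 20° to the left yields a positive (rightward) turn command, while a distant obstacle leaves the robot going straight.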

Table of Contents:
    Chapter 1 Introduction
        1.1 Research Background and Motivation
        1.2 Thesis Organization
    Chapter 2 Literature Review
        2.1 Patrol Robots
        2.2 Robot Obstacle Avoidance
        2.3 Human Movement Recognition
    Chapter 3 Depth-Image-Based Fuzzy-Control Obstacle Avoidance System
        3.1 System Flow
        3.2 Depth Image Preprocessing
        3.3 Obstacle Determination
        3.4 Fuzzy Controller Design
            3.4.1 Defining Input and Output Variables
            3.4.2 Choosing the Fuzzification Strategy
            3.4.3 Building and Inferring Fuzzy Rules
            3.4.4 Defuzzification Strategy
    Chapter 4 Action Recognition System Combining Cluster Analysis with a Merged Fuzzy Neural Network
        4.1 System Flow
        4.2 Kinect v2 Camera
        4.3 Skeleton Feature Extraction
        4.4 Sliding Window
        4.5 Action Feature Classification Based on Cluster Analysis
        4.6 Fuzzy Neural Network
        4.7 Merged Fuzzy Neural Network
        4.8 Voting System
    Chapter 5 User Interface System
        5.1 ROS Communication Architecture
        5.2 Qt Framework
        5.3 Router Architecture
        5.4 User Interface Functions
    Chapter 6 Experimental Results and Analysis
        6.1 Depth-Image-Based Fuzzy-Control Obstacle Avoidance Experiments
            6.1.1 Experimental Environment
            6.1.2 Experimental Hardware
            6.1.3 Xtion Sensor Specifications
            6.1.4 Fuzzy-Control Obstacle Avoidance Experiments and Analysis
        6.2 Action Recognition Experiments Combining Cluster Analysis with a Fuzzy Neural Network
            6.2.1 Kinect v2 Camera Data Tests
            6.2.2 Recording the Action Database
            6.2.3 Training and Analysis of Cluster Analysis Combined with a Merged Fuzzy Neural Network
            6.2.4 Abnormal Action Recognition Experiments
    Chapter 7 Conclusions and Future Work
        7.1 Conclusions
        7.2 Future Work
    References
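The outline's Chapter 4 pairs a sliding window over skeleton frames (4.4) with a voting system over per-window predictions (4.8). A small self-contained sketch of those two pieces follows; `classify`, the window size of 30 frames, and the step of 10 frames are hypothetical placeholders, with `classify` standing in for the thesis's trained merged fuzzy neural network:

```python
from collections import Counter

def sliding_windows(frames, size, step):
    """Yield fixed-length, possibly overlapping windows over a frame sequence."""
    for start in range(0, len(frames) - size + 1, step):
        yield frames[start:start + size]

def vote(labels):
    """Majority vote over the per-window predicted action labels."""
    return Counter(labels).most_common(1)[0][0]

# Hypothetical usage, where classify(window) would be the trained network:
# predictions = [classify(w) for w in sliding_windows(skeleton_frames, 30, 10)]
# action = vote(predictions)
```

The vote smooths out single-window misclassifications, so one noisy window does not trigger a false danger alert.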


Full Text: electronic full text embargoed until 2027/09/01