
Graduate Student: Chang, Nai-Hsiang (張乃祥)
Thesis Title: Following Systems for Two-Wheeled Self-Balancing Mobile Robots Based on Deep Learning Image Recognition (基於深度學習影像辨識之兩輪平衡車追隨系統)
Advisors: Wang, Wei-Yen (王偉彥); Hsu, Chen-Chien (許陳鑑)
Degree: Master
Department: Department of Electrical Engineering
Year of Publication: 2020
Graduation Academic Year: 108 (2019–2020)
Language: Chinese
Pages: 70
Keywords (Chinese): 兩輪平衡車、機器人作業系統、深度學習影像辨識
Keywords (English): Two-wheeled self-balancing mobile robot, ROS, Deep learning image recognition
DOI URL: http://doi.org/10.6345/NTNU202000035
Thesis Type: Academic thesis
Abstract:

    The purpose of this thesis is to save human labor when carrying luggage and making short-range movements. It proposes an electronic assistive device built on a two-wheeled self-balancing mobile robot and developed on the Robot Operating System (ROS) software framework, offering four functional modes to the user. The robot balances itself with a fuzzy controller: because fuzzy control requires no kinematic model of the robot, the controller is designed from expert experience and its parameters are fine-tuned against the robot's actual output behavior, so the design is simple yet achieves good control performance. The four modes are handcart mode, remote-control mode, passenger mode, and following mode, of which handcart mode and following mode are the main assistive functions. In handcart mode, a handlebar mounted on top of the robot lets the user pull it like a luggage cart, and a joystick on the handlebar steers it, so the robot serves as a labor-saving luggage carrier. In following mode, a monocular camera combined with deep learning identifies and follows the target person; the deep learning image-recognition model is the Single Shot MultiBox Detector (SSD). Finally, experiments verify the feasibility of using the two-wheeled self-balancing mobile robot as an electronic assistive device.
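The record states only that balancing uses a fuzzy controller tuned from expert experience, without publishing the rule base. As a rough illustration of the abstract's point that no kinematic model is needed, here is a minimal Sugeno-style fuzzy balance controller; every membership shape, rule consequent, and gain below is invented for the sketch, not taken from the thesis.

```python
# Illustrative sketch only: the thesis tunes its fuzzy controller from expert
# experience and the robot's observed behavior; all parameters here are
# hypothetical placeholders.

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzify(x, scale):
    """Degrees of membership in NEG / ZERO / POS for x normalized by scale."""
    x = max(-1.0, min(1.0, x / scale))
    return {
        "NEG": tri(x, -2.0, -1.0, 0.0),
        "ZERO": tri(x, -1.0, 0.0, 1.0),
        "POS": tri(x, 0.0, 1.0, 2.0),
    }

# Sugeno-style rule table: (tilt label, tilt-rate label) -> torque consequent.
RULES = {
    ("NEG", "NEG"): -1.0, ("NEG", "ZERO"): -0.6, ("NEG", "POS"): 0.0,
    ("ZERO", "NEG"): -0.4, ("ZERO", "ZERO"): 0.0, ("ZERO", "POS"): 0.4,
    ("POS", "NEG"): 0.0, ("POS", "ZERO"): 0.6, ("POS", "POS"): 1.0,
}

def fuzzy_balance_torque(tilt_rad, tilt_rate_rad_s,
                         tilt_scale=0.35, rate_scale=2.0, torque_max=6.0):
    """Weighted-average (Sugeno) defuzzification over the 3x3 rule base."""
    mu_tilt = fuzzify(tilt_rad, tilt_scale)
    mu_rate = fuzzify(tilt_rate_rad_s, rate_scale)
    num = den = 0.0
    for (lt, lr), out in RULES.items():
        w = min(mu_tilt[lt], mu_rate[lr])   # rule firing strength (AND = min)
        num += w * out
        den += w
    return torque_max * (num / den if den > 0.0 else 0.0)

# Upright and still -> no corrective torque; tilting forward -> positive torque.
print(fuzzy_balance_torque(0.0, 0.0))            # → 0.0
print(fuzzy_balance_torque(0.2, 0.5) > 0.0)      # → True
```

The point of the sketch is the one the abstract makes: the controller is just a small rule table plus defuzzification, tuned by trial against the robot's behavior rather than derived from its dynamics. In a deployment like the thesis describes, such a function would run inside a ROS node at the IMU sampling rate.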

Table of Contents:
    Acknowledgments — i
    Abstract (Chinese) — ii
    Abstract (English) — ii
    Table of Contents — iv
    List of Tables — vii
    List of Figures — ix
    Chapter 1: Introduction — 1
        1.1 Motivation — 1
        1.2 Literature Review — 2
        1.3 Thesis Organization — 9
    Chapter 2: Hardware Architecture and Design — 10
        2.1 Two-Wheeled Self-Balancing Robot Mechanism — 10
        2.2 Computing Core — 13
        2.3 Motor System — 15
        2.4 Power System — 18
        2.5 Sensor System — 20
    Chapter 3: ROS System Architecture Design — 27
        3.1 Robot Operating System — 27
        3.2 ROS Architecture of the Two-Wheeled Self-Balancing Robot — 32
    Chapter 4: Motion Control and Functional Mode Design — 34
        4.1 Motion Controller Design — 34
        4.2 Functional Mode Design — 45
    Chapter 5: Deep Learning Image Recognition — 49
        5.1 Training Data — 49
        5.2 Object Detection Model Architecture — 51
    Chapter 6: Experimental Results and Discussion — 58
        6.1 Balance Control Experiments — 58
        6.2 Forward Control Experiments — 60
        6.3 Handcart Experiments — 62
        6.4 Following Control Experiments — 63
    Chapter 7: Conclusions and Future Work — 66
        7.1 Conclusions — 66
        7.2 Future Work — 67
    References — 68
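Chapter 5 covers the SSD-based detection used by the following mode, but this record does not spell out the control law. As a hedged sketch of the idea, here is one way a single detected bounding box from a monocular camera could be turned into velocity commands; every gain, threshold, and name is an assumption, not the thesis's method.

```python
# Illustrative sketch only: maps one SSD detection (bounding box) to
# (linear, angular) velocity commands. All parameters are hypothetical.

def follow_command(box, img_w=640, img_h=480, target_area_frac=0.10,
                   k_turn=2.0, k_fwd=1.5, v_max=0.8, w_max=1.0):
    """box = (x_min, y_min, x_max, y_max) in pixels -> (v, w) commands."""
    x_min, y_min, x_max, y_max = box
    # Steering: proportional to the horizontal offset of the box center.
    cx = 0.5 * (x_min + x_max)
    err_x = (cx - img_w / 2.0) / (img_w / 2.0)      # -1 (left) .. +1 (right)
    w = max(-w_max, min(w_max, -k_turn * err_x))    # target right -> turn right
    # Speed: a single camera gives no depth, so the apparent box area
    # serves as a crude distance proxy (smaller box = farther target).
    area_frac = (x_max - x_min) * (y_max - y_min) / float(img_w * img_h)
    err_d = target_area_frac - area_frac
    v = max(0.0, min(v_max, k_fwd * err_d))         # stop once close enough
    return v, w

# A far, centered target: drive forward without turning.
v, w = follow_command((280, 100, 360, 340))
print(v > 0.0, abs(w) < 1e-9)                       # → True True
```

In a ROS setup like the one the thesis describes, the resulting pair would typically be published as a velocity command to the balance controller; the design choice worth noting is that bounding-box area is only a rough distance proxy and real systems usually smooth it over several frames.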


    Electronic full text embargoed until 2025/01/10.