
Graduate Student: 周毅安 (Guilherme Christmann)
Thesis Title: Balance and Steering Control of a Humanoid Robot on an Electric Scooter
Advisor: 包傑奇 (Jacky Baltes)
Degree: Master
Department: Department of Electrical Engineering
Year of Publication: 2021
Academic Year of Graduation: 109 (2020–2021)
Language: English
Number of Pages: 58
Keywords: Humanoid Robotics, Classical Control
DOI URL: http://doi.org/10.6345/NTNU202100111
Document Type: Academic thesis
Abstract: Autonomous vehicles and humanoid robotics are two fields that have greatly benefited from the rapid pace of development in deep learning. With such systems, it is possible to perform complex tasks and behaviors directly from raw sensor data. In light of recent developments in both fields, it is time to tackle the challenges that lie at their intersection. From the perspective of humanoid robotics, truly general-purpose robots must be capable of operating in any human environment, which includes the operation of vehicles; we believe this poses an interesting challenge to the state of the art in both fields. This work focuses on the control and operation of a two-wheeled scooter by a large humanoid robot. A 3D model of the robot and scooter system was developed in CAD software, along with physics-based simulation environments. The core of this study is the development of a steering-based control system. Two controllers were developed, analyzed, and compared: a PID controller and a reinforcement learning agent. Both were able to balance the scooter and track trajectories, and each performed better under different conditions. The advantages and limitations of applying these controllers to the real robot are also discussed.
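The steering-based balance idea described in the abstract can be sketched as a small simulation: a PID controller reads the scooter's roll tilt and commands a steering angle that moves the wheel contact line back under the center of mass. The gains, the one-dimensional toy roll dynamics, and all constants below are illustrative assumptions for a minimal sketch, not the thesis's model of the THORMANG3–Gogoro system.

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*sum(e*dt) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def simulate(tilt_ref=0.0, steps=2000, dt=0.005):
    """Toy linearized roll dynamics: gravity amplifies the tilt, while
    steering into the fall produces a restoring tilt acceleration."""
    pid = PID(kp=8.0, ki=0.5, kd=1.2, dt=dt)
    tilt, tilt_rate = 0.1, 0.0          # start 0.1 rad away from upright
    for _ in range(steps):
        steer = pid.step(tilt - tilt_ref)        # steering command (rad)
        tilt_accel = 9.81 * tilt - 5.0 * steer   # illustrative constants
        tilt_rate += tilt_accel * dt             # explicit Euler integration
        tilt += tilt_rate * dt
    return tilt
```

With these (assumed) gains the toy system settles back to the upright reference within a few seconds; a nonzero tilt reference, as in the thesis's turning experiments, would instead hold the vehicle in a lean.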

Table of Contents:
  Acknowledgements
  Abstract
  Table of Contents
  List of Tables
  List of Figures
  List of Symbols
  Chapter 1 Introduction
    1.1 Aims and Objectives
      1.1.1 General Aim
      1.1.2 Objectives
  Chapter 2 Literature Review
    2.1 Two-wheeled Vehicle Dynamics
    2.2 Reinforcement Learning
      2.2.1 PPO – Proximal Policy Optimization
  Chapter 3 Robot-Scooter System
    3.1 The Two-Wheeled Vehicle – Gogoro Scooter
    3.2 The Robot – THORMANG3
    3.3 Robot-Scooter System
  Chapter 4 Methodology
    4.1 PID Controller
      4.1.1 Balance Control
      4.1.2 Direction Control – Path Tracking
    4.2 RL Agent – PPO
      4.2.1 Balance Control
      4.2.2 Commanded Control – Path Tracking
  Chapter 5 Results and Discussion
    5.1 Balance Control – PID Controller
      5.1.1 PID – Upright Balance (Tilt Reference = 0°)
      5.1.2 PID – Balance while Turning (Tilt Reference ≠ 0°)
      5.1.3 PID – Upright Balance under Disturbances (Tilt Reference = 0°)
    5.2 Balance Control – RL Agent
      5.2.1 RL – Upright Balance (Tilt Reference = 0°)
      5.2.2 RL – Upright Balance under Disturbances (Tilt Reference = 0°)
    5.3 Path Tracking Control – PID Controller
      5.3.1 PID – Tracking Sinusoidal Path
      5.3.2 PID – Tracking Straight Line Path
      5.3.3 PID – Tracking Straight Line Path under Disturbances
    5.4 Path Tracking – RL Agent
      5.4.1 RL Agent – Tracking Sinusoidal Path
      5.4.2 RL Agent – Tracking Straight Line Path
      5.4.3 RL Agent – Tracking Straight Line under Disturbances
    5.5 Comparing the PID Controller and the RL Agent
    5.6 Real Robot Steering Velocity Test
  Chapter 6 Conclusion and Future Work
  References
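The RL agent outlined in Sections 2.2.1 and 4.2 is trained with Proximal Policy Optimization. The heart of PPO is its clipped surrogate objective, which can be illustrated in a few lines; this is a generic sketch of the objective from Schulman et al. [24], not the thesis's actual training code, and the epsilon value is just PPO's common default.

```python
import numpy as np

def ppo_clip_loss(ratio, advantage, eps=0.2):
    """Clipped surrogate objective of PPO, returned as a loss:
    L = -E[min(r_t * A_t, clip(r_t, 1 - eps, 1 + eps) * A_t)],
    where r_t is the probability ratio pi_new(a|s) / pi_old(a|s)
    and A_t is the advantage estimate."""
    ratio = np.asarray(ratio, dtype=float)
    advantage = np.asarray(advantage, dtype=float)
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # Taking the minimum makes the objective a pessimistic bound,
    # which discourages overly large policy updates.
    return -np.mean(np.minimum(unclipped, clipped))
```

Clipping caps the incentive to push the probability ratio far from 1 when the advantage is positive, while the minimum keeps the full (unclipped) penalty when the update would make a bad action more likely.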

    [1] K. J. Astrom, R. E. Klein, and A. Lennartsson, “Bicycle dynamics and control: adapted bicycles for education and research,” IEEE Control Systems Magazine, vol. 25, no. 4, pp. 26–47, 2005.

    [2] M. Yamakita, A. Utano, and K. Sekiguchi, “Experimental study of automatic control of bicycle with balancer,” in 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5606–5611, IEEE, 2006.

    [3] U. Franke, D. Gavrila, S. Gorzig, F. Lindner, F. Puetzold, and C. Wohler, “Autonomous driving goes downtown,” IEEE Intelligent Systems and their Applications, vol. 13, no. 6, pp. 40–48, 1998.

    [4] C. Thorpe, M. Hebert, T. Kanade, and S. Shafer, “Toward autonomous driving: the CMU Navlab. I. Perception,” IEEE Expert, vol. 6, no. 4, pp. 31–42, 1991.

    [5] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, 2015.

    [6] F. Lambert, “Tesla to release new full self-driving beta update w/ ‘fundamental improvements,’ wider release could be coming soon,” Electrek, 2020.

    [7] N. Sünderhauf, O. Brock, W. Scheirer, R. Hadsell, D. Fox, J. Leitner, B. Upcroft, P. Abbeel, W. Burgard, M. Milford, et al., “The limits and potentials of deep learning for robotics,” The International Journal of Robotics Research, vol. 37, no. 4-5, pp. 405–420, 2018.

    [8] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra, “Continuous control with deep reinforcement learning,” arXiv preprint arXiv:1509.02971, 2015.

    [9] S. Singhania, I. Kageyama, and V. M. Karanam, “Study on low-speed stability of a motorcycle,” Applied Sciences, vol. 9, no. 11, p. 2278, 2019.

    [10] N. K. Vu and H. Q. Nguyen, “Balancing control of two-wheel bicycle problems,” Mathematical Problems in Engineering, vol. 2020, 2020.

    [11] Y. Tanaka and T. Murakami, “Self sustaining bicycle robot with steering controller,” in The 8th IEEE International Workshop on Advanced Motion Control, 2004. AMC ’04, pp. 193–197, IEEE, 2004.

    [12] L. X. Wang, J. M. Eklund, and V. Bhalla, “Simulation & road test results on balance and directional control of an autonomous bicycle,” in 2012 25th IEEE Canadian Conference on Electrical and Computer Engineering (CCECE), pp. 1–5, IEEE, 2012.

    [13] S. Lee and W. Ham, “Self stabilizing strategy in tracking control of unmanned electric bicycle with mass balance,” in IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 3, pp. 2200–2205, IEEE, 2002.

    [14] Y. Kim, H. Kim, and J. Lee, “Stable control of the bicycle robot on a curved path by using a reaction wheel,” Journal of Mechanical Science and Technology, vol. 29, no. 5, pp. 2219–2226, 2015.

    [15] A. Sikander and R. Prasad, “Reduced order modelling based control of two wheeled mobile robot,” Journal of Intelligent Manufacturing, vol. 30, no. 3, pp. 1057–1067, 2019.

    [16] M. Yamakita and A. Utano, “Automatic control of bicycles with a balancer,” in Proceedings, 2005 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, 2005.

    [17] C.-F. Huang, Y.-C. Tung, and T.-J. Yeh, “Balancing control of a robot bicycle with uncertain center of gravity,” in 2017 IEEE International Conference on Robotics and Automation (ICRA), pp. 5858–5863, IEEE, 2017.

    [18] C.-F. Huang, Y.-C. Tung, H.-T. Lu, and T.-J. Yeh, “Balancing control of a bicycle-riding humanoid robot with center of gravity estimation,” Advanced Robotics, vol. 32, no. 17, pp. 918–929, 2018.

    [19] R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction. MIT Press, 2018.

    [20] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller, “Playing Atari with deep reinforcement learning,” arXiv preprint arXiv:1312.5602, 2013.

    [21] J. Peters and S. Schaal, “Policy gradient methods for robotics,” in 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2219–2225, IEEE, 2006.

    [22] V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu, “Asynchronous methods for deep reinforcement learning,” in International Conference on Machine Learning, pp. 1928–1937, 2016.

    [23] J. Schulman, S. Levine, P. Abbeel, M. Jordan, and P. Moritz, “Trust region policy optimization,” in International Conference on Machine Learning, pp. 1889–1897, 2015.

    [24] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov, “Proximal policy optimization algorithms,” arXiv preprint arXiv:1707.06347, 2017.

    [25] S. Ruder, “An overview of gradient descent optimization algorithms,” arXiv preprint arXiv:1609.04747, 2016.

    [26] G. Klambauer, T. Unterthiner, A. Mayr, and S. Hochreiter, “Self-normalizing neural networks,” Advances in Neural Information Processing Systems, vol. 30, pp. 971–980, 2017.
