
Author: Hwang, Pin-Jui (黃品叡)
Title: Vision-Based Learning from Demonstration and Collaborative Robotic Systems (以視覺為基礎之示範學習與協作機器人系統)
Advisor: Hsu, Chen-Chien (許陳鑑)
Committee members: Hsu, Chen-Chien (許陳鑑); Wang, Wei-Yen (王偉彥); Tsai, Chi-Yi (蔡奇謚); Wong, Ching-Chang (翁慶昌); Wang, Yin-Tien (王銀添)
Oral defense date: 2022/07/18
Degree: Doctor
Department: Department of Electrical Engineering
Year of publication: 2023
Graduation academic year: 111
Language: English
Number of pages: 65
Keywords (English): robotic systems, learning from demonstration, action recognition, object detection, trajectory planning, collaborative robots, human-robot interaction, robot localization
Research methods: experimental design method; systems science method
DOI URL: http://doi.org/10.6345/NTNU202300385
Document type: Academic thesis
Abstract: Robot arms have been widely used in automated factories over the past decades. However, most conventional robots operate on pre-defined programs, which limits their responsiveness and adaptability to changes in the environment. When new tasks are deployed, weeks of reprogramming by robotic engineers/operators are inevitable, at the cost of downtime, expense, and delay. To address this problem, this dissertation proposes a more intuitive way for robots to perform tasks through learning from human demonstration (LfD), based on two major components: understanding human behavior and reproducing the task with a robot. For understanding human behavior/intent, two approaches are presented. The first uses multi-action recognition carried out by an inflated 3D network (I3D), followed by a proposed statistically fragmented approach that refines the action recognition results. The second is a vision-based spatial-temporal action detection method that detects human actions in real time, focusing on fine-grained hand movement, to establish an action base. For robot reproduction according to the descriptions in the action base, we integrate the sequence of actions in the action base with a key path derived by an object trajectory inductive method for motion planning, so as to reproduce the task demonstrated by the human user. In addition to static industrial robot arms, collaborative robots (cobots) intended for human-robot interaction are playing an increasingly important role in intelligent manufacturing. Though promising for many industrial and home-service applications, collaborative robots still face open issues, including understanding human intention in a natural way, adapting task execution when the environment changes, and robot mobility to navigate around a working environment.
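The fragmented aggregation idea described above can be sketched roughly as follows: per-clip labels from a sliding-window action classifier (such as I3D) are grouped into fragments, each fragment takes the majority label, and consecutive duplicates collapse into an ordered action sequence. This is a minimal illustration under assumed names and parameters, not the dissertation's actual implementation.

```python
from collections import Counter

def fragment_and_recognize(clip_labels, fragment_len=16):
    """Turn noisy per-clip action labels into an ordered action sequence.

    clip_labels: list of predicted labels, one per sliding-window clip
    (e.g. from an I3D classifier). Names and defaults are illustrative.
    """
    # Group consecutive clips into fixed-length fragments.
    fragments = [
        clip_labels[i:i + fragment_len]
        for i in range(0, len(clip_labels), fragment_len)
    ]
    # Majority vote within each fragment suppresses per-clip noise.
    fragment_labels = [Counter(f).most_common(1)[0][0] for f in fragments]
    # Collapse consecutive duplicates into the demonstrated action order.
    sequence = []
    for label in fragment_labels:
        if not sequence or sequence[-1] != label:
            sequence.append(label)
    return sequence
```

A sequence produced this way could then serve as the entries of an action base for later robot reproduction.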
This dissertation therefore proposes a modularized solution for mobile collaborative robot systems: a cobot equipped with a multi-camera localization scheme for self-localization understands human intention in a natural way via voice commands, and executes the tasks instructed by the human operator even in unseen scenarios where the environment has changed. To validate the proposed approaches, comprehensive experiments are conducted and presented in this dissertation.
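As a rough illustration of one ingredient of such a system, the sketch below fuses per-camera robot pose estimates into a single pose by confidence-weighted averaging, with the heading averaged on the unit circle to handle angle wrap-around. This is an assumed toy scheme, not the multi-camera localization method actually developed in the dissertation.

```python
import math

def fuse_camera_estimates(estimates):
    """Fuse per-camera pose estimates into one robot pose.

    estimates: list of (x, y, theta, confidence) tuples, one per camera
    observing the robot. All names are illustrative assumptions.
    """
    total = sum(w for _, _, _, w in estimates)
    # Confidence-weighted position average.
    x = sum(xi * w for xi, _, _, w in estimates) / total
    y = sum(yi * w for _, yi, _, w in estimates) / total
    # Average the heading via sin/cos so estimates straddling the
    # -pi/+pi boundary do not cancel incorrectly.
    s = sum(math.sin(t) * w for _, _, t, w in estimates) / total
    c = sum(math.cos(t) * w for _, _, t, w in estimates) / total
    return x, y, math.atan2(s, c)
```

In a full system, the fused pose would feed the navigation module while voice commands select which action-base entry the cobot executes.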

Table of Contents:
Chapter 1 Introduction 1
  1.1 Motivation 2
  1.2 Proposed approaches 5
Chapter 2 Architecture of Learning from Demonstration Robotic Systems 8
Chapter 3 Robot Understanding of LfD Robotic Systems 13
  3.1 Object recognition and localization 13
  3.2 Action recognition 14
    3.2.1 Inflated 3D network 14
    3.2.2 Statistically fragmented approach 15
    3.2.3 Spatial-temporal action detection 17
  3.3 Development of action base 20
Chapter 4 Robot Reproduction of LfD Robotic Systems 24
  4.1 Motion planning based on action base 24
  4.2 Offline induction of object trajectory 24
    4.2.1 Object trajectory inductive method 24
    4.2.2 Motion planning based on inductive trajectory 31
    4.2.3 One-shot mimic through object trajectory 31
Chapter 5 Collaborative Robotic Systems 33
  5.1 System architecture 33
  5.2 Understanding of human commands 36
  5.3 Execution of mobile cobot 40
    5.3.1 Object localization 41
    5.3.2 Mobile robot navigation incorporating a multi-camera localization system 41
    5.3.3 Robot execution / human-robot collaboration 43
Chapter 6 Experimental Results 46
  6.1 Learning from demonstration robotic systems 46
    6.1.1 Action recognition based on statistically fragmented approach 46
    6.1.2 Object trajectory inductive method 47
    6.1.3 Example task 1: manipulation of a coffee maker 49
  6.2 Mobile collaborative robotic systems 51
    6.2.1 Example task 2: human collaborated with a fixed robot arm to assemble a wooden chair 52
    6.2.2 Multi-camera localization system 54
    6.2.3 Example task 3: human and mobile robot collaborated to assemble a wooden chair 57
Chapter 7 Conclusions 60
References 61
Academic Achievement and Awards 65


Electronic full text: embargoed; publicly available after 2026/07/01.