
Graduate Student: Tseng, Wen-Lin (曾雯琳)
Thesis Title: 適用於陪伴型機器人與被陪伴者間互動之視覺式人體動作辨識系統
A Vision-based Human Action Recognition System for Companion Robots and Human Interaction
Advisor: Fang, Chiung-Yao (方瓊瑤)
Degree: Master
Department: Department of Computer Science and Information Engineering
Year of Publication: 2017
Academic Year of Graduation: 105
Language: Chinese
Pages: 114
Chinese Keywords: human action recognition, Kinect 2.0 for Xbox One, depth image, color image, Support Vector Machine
English Keywords: 3D depth motion map, fuzzy skin detection, socially acceptable manner
DOI URL: https://doi.org/10.6345/NTNU202202033
Thesis Type: Academic thesis
Chinese Abstract:
    In recent years, sales of home companion robots have been increasing while their prices have been falling, so more and more families can afford one. The main function of a home companion robot is to help family members or caregivers accompany and care for young children and the elderly. By understanding the behavior and state of the child or elder, the robot can respond appropriately, thereby providing interaction, companionship, and care. This study develops a vision-based human action recognition system for interaction between a companion robot and the person it accompanies; the system automatically recognizes the accompanied person's actions to support companionship and care.
    The system first reads consecutive depth images and consecutive color images and determines whether a person is present in the scene. It then builds a depth motion map from the depth images and a color motion map from the color images, merges the two maps into a single image, and extracts a Histogram of Oriented Gradients (HOG) from that image as the feature for action recognition. Finally, the features are fed into an SVM for classification to obtain the recognition result.
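    To make the pipeline concrete, below is a minimal sketch of the motion-map and HOG stages, assuming each frame arrives as a 2D NumPy array. Accumulating absolute inter-frame differences follows the usual depth-motion-map construction; the equal-weight merge of the two maps, the HOG parameters, and the names `motion_map` and `extract_features` are illustrative assumptions, not the thesis's actual code.

```python
import numpy as np
from skimage.feature import hog

def motion_map(frames):
    """Accumulate absolute inter-frame differences into a single map.

    frames: sequence of 2D arrays of equal shape (depth frames, or
    grayscale versions of the color frames). Regions that move during
    the sequence accumulate large values.
    """
    acc = np.zeros(frames[0].shape, dtype=np.float64)
    for prev, curr in zip(frames[:-1], frames[1:]):
        acc += np.abs(curr.astype(np.float64) - prev.astype(np.float64))
    peak = acc.max()
    return acc / peak if peak > 0 else acc  # normalize to [0, 1]

def extract_features(depth_frames, gray_frames):
    """Merge depth and color motion maps, then extract one HOG vector."""
    # Equal-weight merge is an assumption; both maps must share one resolution.
    merged = 0.5 * motion_map(depth_frames) + 0.5 * motion_map(gray_frames)
    return hog(merged, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))
```

    The result is one fixed-length feature vector per video, which is what the SVM classification stage expects.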
    The system recognizes eight actions: waving the right hand, waving the left hand, shaking the right hand, shaking the left hand, hugging, bowing, walking, and boxing. Database 1 was recorded by five subjects, each performing every action 20 times, for a total of 800 videos; 640 were used for training and 160 for testing, and the recognition accuracy is 88.75%. Database 2 was recorded by a single subject, a 12-year-old child, and its 320 videos were all used for testing; the recognition accuracy is 74.37%. Database 3 contains actions recorded while the robot was moving; its 320 videos, from four subjects, were split into 160 for training and 160 for testing, and the recognition accuracy is 51.25%. These results indicate that the system's recognition is reasonably reliable.
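    The train/test protocol for Database 1 can be sketched as follows, with scikit-learn's SVC standing in for whichever SVM implementation the thesis used. The RBF kernel and its parameters are assumptions, and the random arrays are placeholders for the real 640 training and 160 test HOG vectors.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

ACTIONS = ["wave_right", "wave_left", "shake_right", "shake_left",
           "hug", "bow", "walk", "box"]  # the eight action classes

# Placeholder features: in the real system these come from extract_features.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(640, 3780)), rng.integers(0, 8, size=640)
X_test, y_test = rng.normal(size=(160, 3780)), rng.integers(0, 8, size=160)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # kernel choice is an assumption
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```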

English Abstract:
    Companion robots can help people with special-care needs, such as the elderly, children, and the disabled. In recent years, demand for and supply of home companion robots have been growing rapidly.
    In this study, a vision-based human action recognition system for interaction between companion robots and humans is developed. The aim is to produce a practical method for recognizing a set of common, socially acceptable behaviors.
    First, the Kinect 2.0 captures 3D depth images and 2D color images simultaneously with its depth sensor and RGB camera. Second, the system builds a 3D depth motion map (3D-DMM) from the depth images and a color motion map from the color images, and extracts a Histogram of Oriented Gradients (HOG) descriptor from the combined shape information. In the color-image processing, a fuzzy skin detection method is used for body detection. Finally, the HOG descriptors are fed into a support vector machine (SVM), which classifies them to produce the human action recognition result.
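    The abstract does not specify how the fuzzy skin detection works. A common approach, sketched below purely as an assumption, converts RGB to YCbCr chroma and assigns each pixel a soft skin membership in [0, 1]; the chroma ranges and ramp width are illustrative values, not the thesis's.

```python
import numpy as np

def skin_membership(img_rgb):
    """Fuzzy skin membership in YCbCr space (thresholds are illustrative)."""
    r = img_rgb[..., 0].astype(np.float64)
    g = img_rgb[..., 1].astype(np.float64)
    b = img_rgb[..., 2].astype(np.float64)
    # Standard full-range RGB -> YCbCr chroma components.
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b

    def ramp(x, lo, hi, soft=10.0):
        # 1 inside [lo, hi], falling linearly to 0 over a width of `soft`.
        return np.clip(np.minimum(x - (lo - soft), (hi + soft) - x) / soft,
                       0.0, 1.0)

    # Typical skin chroma ranges: Cb in [77, 127], Cr in [133, 173].
    return ramp(cb, 77.0, 127.0) * ramp(cr, 133.0, 173.0)
```

    Thresholding or connected-component analysis on this membership map would then localize the body region for the color branch.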
    Experimental results show that Database 1 includes 800 sequences, 640 for training and 160 for testing, on which the proposed method achieves an average human action recognition accuracy of 88.75%. Database 2 includes 320 sequences, all used for testing, on which the average accuracy is 74.37%. Database 3 includes 320 sequences, 160 for training and 160 for testing, on which the average accuracy is 51.25%.

Table of Contents:
    Abstract (Chinese) I
    Abstract (English) II
    Acknowledgements III
    Table of Contents IV
    List of Figures VI
    List of Tables XII
    Chapter 1 Introduction 1
        1.1 Research Motivation 1
        1.2 Research Difficulties 4
        1.3 Thesis Organization 5
    Chapter 2 Literature Review 6
        2.1 Analysis of Human Action Recognition Systems Based on Depth/Color Images 6
        2.2 Analysis of Human Action Recognition Systems Integrating Depth and Color Image Inputs 16
    Chapter 3 The Human Action Recognition System 19
        3.1 System Objectives 19
        3.2 Research Environment and Equipment 19
        3.3 System Flow 22
    Chapter 4 Human Action Feature Extraction and Classification 27
        4.1 Image Preprocessing 27
        4.2 Construction of Depth Motion Maps 32
        4.3 Construction of the Color Motion Map 37
        4.4 Feature Extraction from the Depth Motion Maps and Color Motion Map 42
        4.5 Human Action Classification Techniques 44
    Chapter 5 Experimental Results and Discussion 50
        5.1 Human Action Recognition Accuracy on Database 1 51
        5.2 Human Action Recognition Accuracy on Database 2 89
        5.3 Human Action Recognition Accuracy on Database 3 98
    Chapter 6 Conclusions and Future Work 109
        6.1 Conclusions 109
        6.2 Future Work 110
    References 111
