
Author: 胡雅雯 (Hu, Ya-Wen)
Thesis title: 基於深度學習之視覺式即時室內健身輔助系統
A Vision-Based Real-time Indoor Fitness Assistance System Developed by Deep Learning Method
Advisor: 方瓊瑤 (Fang, Chiung-Yao)
Oral examination committee: 陳世旺 (Chen, Sei-Wang), 方瓊瑤 (Fang, Chiung-Yao), 黃仲誼 (Huang, Chung-I), 羅安鈞 (Luo, An-Chun), 許之凡 (Hsu, Chih-Fan)
Oral examination date: 2022/06/30
Degree: Master
Department: Department of Computer Science and Information Engineering
Year of publication: 2022
Academic year of graduation: 110 (2021-2022)
Language: Chinese
Number of pages: 65
Keywords: sports technology, smart fitness assistance system, neural network for human action recognition, temporal shift, deep learning, repetition counting, post-inference, real-time system
Research methods: experimental design, comparative study, observational study
DOI URL: http://doi.org/10.6345/NTNU202201000
Document type: Academic thesis
Usage: 159 views, 54 downloads

    In recent years, people's health awareness has risen, and more and more people have begun to pay attention to developing regular fitness habits. When performing strength exercises, recording workout content is an important way to avoid injury and track progress. This study therefore proposes a vision-based real-time indoor fitness assistance system built on deep learning, which recognizes fitness movements and counts repetitions, with the purpose of helping users record their workouts automatically and conveniently without contact devices.
    The vision-based real-time indoor fitness assistance system consists of three modules: a fitness action recognition module, an action repetition counting module, and an inference correction module. This study uses an improved version of the Temporal Shift Module, better suited to mobile devices, for fitness action recognition, and the feature maps extracted by the recognition network are used to count action repetitions: suitable feature-value change signals and their peaks are selected through signal filtering (SF), signal choice (SC), and peak filtering (PF) algorithms. Finally, the action recognition result is corrected using a short-term inference score and a long-term correction score, which keeps the system stable while preserving sensitivity to the user's action changes. When the user switches actions, the repetition counting module is reset, and the integrated result is output.
    A total of 25 strength exercises were used in the experiments: Squat, Sumo Squat, Split Squat, Lunge, Reverse Lunge, Romanian Deadlift, Hip Thrust, Bridge, Single Leg Bridge, Knee Push Ups, Push Ups, Bench Press, Chest Fly, Bent-Over Row, Reverse Fly, Shoulder Press, Front Raise, Lateral Raise, Biceps Curl, Triceps Extension, Russian Twists, Sit Ups, Crunch, Bicycle, and Leg Raise. The experimental results show a Top-1 action recognition accuracy of 90.8% on the CVIU Fitness 25 Dataset. For repetition counting, the mean absolute error over videos (MAEn) was 8.36% and the mean relative error of counts (MREc) was 2.55%, for a total counting difference of 183 repetitions out of 5,004. The system runs at about 13 FPS.
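    The core operation of the Temporal Shift Module mentioned above can be sketched as follows. This is an illustrative NumPy re-implementation of the generic TSM idea, not the thesis's exact mobile-optimized variant; the tensor layout and the shift fraction `fold_div` are assumptions:

```python
import numpy as np

def temporal_shift(x, fold_div=8):
    """Shift a fraction of feature channels along the time axis.

    x: feature tensor of shape (T, C, H, W) for one video clip.
    1/fold_div of the channels are shifted toward the past,
    another 1/fold_div toward the future, and the rest stay put,
    letting a purely 2-D convolution see temporal context.
    """
    t, c, h, w = x.shape
    fold = c // fold_div
    out = np.zeros_like(x)
    out[:-1, :fold] = x[1:, :fold]                   # pull features from t+1
    out[1:, fold:2 * fold] = x[:-1, fold:2 * fold]   # pull features from t-1
    out[:, 2 * fold:] = x[:, 2 * fold:]              # leave other channels as-is
    return out
```

    Because the shift itself has zero parameters and zero FLOPs, it adds temporal modeling to a 2-D backbone almost for free, which is why TSM-style networks suit real-time and mobile settings.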
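    The counting idea behind the results above, tracking a periodic feature signal and counting its filtered peaks, can be illustrated with a minimal sketch. The moving-average window and peak thresholds below are arbitrary stand-ins for the thesis's SF/SC/PF algorithms, and `mre_count` shows how a count relative error like MREc can be computed:

```python
import numpy as np

def count_repetitions(signal, window=5, min_height=0.5, min_distance=10):
    """Count repetitions as filtered peaks of a 1-D feature signal.

    Smooth with a moving average, then keep local maxima that are
    high enough and far enough apart from the previous peak.
    """
    kernel = np.ones(window) / window
    smooth = np.convolve(signal, kernel, mode="same")
    peaks = []
    for i in range(1, len(smooth) - 1):
        is_peak = smooth[i - 1] < smooth[i] >= smooth[i + 1]
        tall_enough = smooth[i] >= min_height
        far_enough = not peaks or i - peaks[-1] >= min_distance
        if is_peak and tall_enough and far_enough:
            peaks.append(i)
    return len(peaks)

def mre_count(predicted, ground_truth):
    """Mean relative error of predicted counts over a set of videos."""
    return float(np.mean([abs(p - t) / t
                          for p, t in zip(predicted, ground_truth)]))
```

    For example, a signal spanning two full sine periods yields a count of 2, and predictions of 49 and 51 against true counts of 50 give a 2% mean relative error.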

    Chapter 1: Introduction 1
        Section 1: Research Motivation and Purpose 1
            A. Sports Technology 4
            B. Fitness Trends 5
        Section 2: Research Difficulties and Limitations 7
        Section 3: Research Contributions 9
        Section 4: Thesis Organization 9
    Chapter 2: Literature Review 10
        Section 1: Smart Fitness Assistance Systems 10
        Section 2: Human Action Recognition Techniques 12
        Section 3: Action Repetition Counting 16
    Chapter 3: Vision-Based Real-time Fitness Assistance System 18
        Section 1: System Flow 18
        Section 2: Fitness Action Recognition Module 19
        Section 3: Fitness Action Repetition Counting Module 20
        Section 4: Inference Correction Module 24
    Chapter 4: Experimental Results and Discussion 28
        Section 1: Experimental Environment and Dataset Construction 28
        Section 2: Fitness Actions in the Experimental Design 30
        Section 3: Analysis of the Fitness Action Recognition Module 33
        Section 4: Analysis of the Fitness Action Repetition Counting Module 39
        Section 5: Analysis of the Inference Correction Module 46
        Section 6: Joint Analysis of Multiple-Action Experiments 48
    Chapter 5: Conclusions and Future Work 51
        Section 1: Conclusions 51
        Section 2: Future Work 52
    References 53
    Appendix A 59

