| Field | Content |
|---|---|
| Graduate Student | 胡雯 (Hu, Wen) |
| Thesis Title | 小提琴姿勢變化即時偵測分析 (Real-Time Violin Gesture Detection and Analysis) |
| Advisor | 李忠謀 (Lee, Chung-Mou) |
| Oral Defense Committee | 江政杰, 簡培修, 李忠謀 |
| Oral Defense Date | 2021/09/29 |
| Degree | Master |
| Department | 資訊工程學系 (Department of Computer Science and Information Engineering) |
| Year of Publication | 2021 |
| Academic Year of Graduation | 109 (ROC calendar) |
| Language | Chinese |
| Pages | 39 |
| Keywords | Human Pose Estimation, Motion Analysis, Violin Playing Posture, Teaching System, Real-Time Feedback Systems |
| DOI URL | http://doi.org/10.6345/NTNU202101472 |
| Document Type | Academic thesis |
Although the violin has a beautiful tone, learning it is tedious for beginners: the sound they produce is far from pleasant, and it takes long practice to gradually master correct playing posture. In lessons, violin teachers not only teach musical knowledge but also correct students' posture errors on the spot. When practicing alone, however, most beginners cannot come close to standard posture without a teacher or practice companion to guide them, and it is even harder for them to notice their own mistakes. Once they become accustomed to practicing with incorrect posture, this not only hinders the improvement of their playing technique but also increases the risk of injury to muscles and bones.

This study applies human pose estimation to judge the correctness of violin playing posture. A camera records the beginner during practice, and the video is sampled at 30 frames per second. The OpenPose open-source library extracts the joint positions of each body part in every frame to detect the human skeleton, and the joint angles are computed to classify the playing posture. These per-frame results are then processed with a sliding window over the continuous image sequence. The system reports the time points at which each playing state occurs during the practice session and gives comments according to the percentage of each state.

Incorrect postures are announced through voice feedback, so students immediately know what to correct while practicing, and after the session they can review the recorded results to understand their own playing. For practice time outside the classroom, the automated system reduces the burden on parents who accompany the practice, while still letting them check the child's playing status from the records at any time. Teachers can likewise review and analyze students' practice records to guide their learning and make long-term plans and adjustments.
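The abstract above outlines the core pipeline: extract per-frame joint positions with OpenPose at 30 fps, compute joint angles to label the playing posture, and smooth the per-frame labels with a sliding window before summarizing when each state occurs and what percentage of the session it covers. The sketch below is a minimal illustration of that idea, not the thesis's actual implementation: the keypoint indices follow OpenPose's standard BODY_25 layout, but the 150-degree elbow threshold, the 15-frame window, the 0.3 confidence cutoff, and the state names are assumptions made here purely for demonstration.

```python
import numpy as np
from collections import Counter, deque

# OpenPose BODY_25 keypoint indices used in this sketch:
# 2 = right shoulder, 3 = right elbow, 4 = right wrist.
R_SHOULDER, R_ELBOW, R_WRIST = 2, 3, 4

def joint_angle(a, b, c):
    """Angle at point b (degrees) formed by the segments b->a and b->c."""
    ba, bc = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cos = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc) + 1e-8)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def classify_frame(keypoints):
    """Map one frame's (25, 3) keypoint array [x, y, confidence] to a posture label.
    The 150-degree threshold on the bowing-arm elbow is an illustrative value only."""
    sh, el, wr = keypoints[R_SHOULDER], keypoints[R_ELBOW], keypoints[R_WRIST]
    if min(sh[2], el[2], wr[2]) < 0.3:          # low-confidence joints: skip this frame
        return "unknown"
    elbow = joint_angle(sh[:2], el[:2], wr[:2])
    return "correct" if elbow < 150 else "bow_arm_too_straight"

def analyze(frames, fps=30, window=15):
    """Smooth per-frame labels with a sliding majority-vote window and
    report when each state begins plus the percentage of time spent in it."""
    labels, recent, report = [], deque(maxlen=window), []
    for i, kp in enumerate(frames):
        recent.append(classify_frame(kp))
        smoothed = Counter(recent).most_common(1)[0][0]   # majority vote in the window
        labels.append(smoothed)
        if not report or report[-1][0] != smoothed:       # a new state starts here
            report.append((smoothed, i / fps))            # (state, start time in seconds)
    percent = {s: 100 * n / len(labels) for s, n in Counter(labels).items()}
    return report, percent
```

In use, each element of `frames` would be the (25, 3) keypoint array that OpenPose returns for one captured frame; `report` lists when each smoothed state begins, and `percent` could drive the end-of-session comments and the voice prompts described above.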