
Graduate Student: 連堃玹 (Lian, Kun-Syuan)
Thesis Title: 基於深度學習對籃球轉播影像之球場校正及球員追蹤
Court Calibration and Player Tracking Based on Deep Learning for Basketball Broadcast Video
Advisor: 賀耀華 (Ho, Yao-Hua)
Committee Members: 吳齊人 (Wu, Chi-Jen); 王超 (Wang, Chao); 賀耀華 (Ho, Yao-Hua)
Oral Defense Date: 2024/07/25
Degree: Master
Department: Department of Computer Science and Information Engineering (資訊工程學系)
Year of Publication: 2024
Graduating Academic Year: 112 (2023–2024)
Language: Chinese
Number of Pages: 49
Chinese Keywords: 籃球轉播影像、球場校正、球員追蹤、軌跡生成、深度學習
English Keywords: Basketball broadcast images, Court calibration, Player tracking, Trajectory generation, Deep learning
Research Method: Experimental design
DOI URL: http://doi.org/10.6345/NTNU202401420
Thesis Type: Academic thesis
    Many ball sports use visual image data to identify tactics and adopt corresponding defensive strategies in response, aiming to score in the most efficient way possible. The data behind such analyses come from players' positional changes on the court, that is, trajectory information. Extracting a team's trajectories typically relies on manual frame-by-frame analysis, which demands considerable time and effort. Moreover, the equipment cost and ongoing maintenance expense of mature optical tracking systems make them difficult to adopt widely.
    In recent years, advances in filming equipment and multimedia streaming technology have made a large volume of broadcast footage available online, offering an alternative way to obtain game information. This study therefore proposes a player positioning and trajectory tracking method based on basketball video frames, Basketball Player Position Tracking Trajectory (BPT), which automatically generates players' trajectory data over the course of a game from broadcast footage.
    The proposed BPT method requires only the frames of a basketball broadcast as input to generate the actual trajectory information of both teams during each offensive and defensive sequence, providing an essential data source for subsequent advanced analysis.
    The BPT method consists of two modules: court calibration from broadcast footage and player tracking. The court calibration method uses a three-stage deep-model task to predict the calibration homography matrix end to end. For player tracking, this study obtains preliminary tracking results from a tracking algorithm, extracts more discriminative player features with the feature model of the BPT method, and re-associates fragmented trajectory segments through greedy merging to achieve more stable tracking.
    Experimental results show that, for court calibration, Intersection over Union (IoU) is used to evaluate accuracy, reaching 87% for half-court calibration. For player tracking, Higher Order Tracking Accuracy (HOTA) is used to evaluate multi-object tracking; the overall player tracking accuracy reaches 77%. By selecting a tracking threshold appropriate to the usage scenario and combining the best tracking algorithm with the proposed BPT method, player tracking accuracy reaches up to 82%.

    Many ball sports utilize visual data to identify tactics and adopt corresponding defensive strategies, allowing teams to score as efficiently as possible. Typically, these analyses are based on players' positional changes on the court, known as trajectory information. Obtaining such data often relies on manual frame-by-frame tracking of trajectories, which is time-consuming and labor-intensive. Additionally, mature optical tracking systems require expensive equipment along with frequent maintenance and calibration, which limits their widespread use.
    With recent advances in filming equipment and video streaming technology, the abundance of broadcast video footage available online provides an alternative means of obtaining game data. In this study, a Basketball Player Position Tracking Trajectory (BPT) method is proposed to automatically generate player trajectory data from basketball broadcast footage available online.
    The proposed BPT method requires only the broadcast video footage of a basketball game as input to produce the actual trajectory information of both teams during each offensive and defensive sequence, yielding critical data for subsequent advanced analysis.
    The BPT method comprises two modules: court calibration and player tracking. The court calibration module uses a three-stage deep learning model to predict the calibration homography matrix end to end: it sequentially obtains court contour information, regresses reference-point estimates, and fine-tunes the result to obtain the optimal homography matrix. For player tracking, the BPT method starts from the preliminary results of a tracking algorithm, extracts distinctive player features through a feature model, and re-associates the initial trajectory segments using a greedy merging approach to achieve more stable tracking.
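    To make the two modules concrete, the sketch below illustrates how their outputs might be combined; it is a minimal illustration, not the thesis's actual implementation. The predicted homography maps each tracked player's foot point from image pixels to court coordinates, and fragmented tracklets are re-associated by greedily merging the most similar temporally non-overlapping pairs. The names (`Tracklet`, `greedy_merge`), the cosine-similarity criterion, and the 0.8 threshold are illustrative assumptions.

```python
import numpy as np
import cv2  # OpenCV is assumed to be available for the perspective warp


def to_court_coords(foot_points_px, H):
    """Map player foot points (N, 2) from image pixels to court coordinates
    using the homography H predicted by the calibration module."""
    pts = np.asarray(foot_points_px, dtype=np.float32).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H.astype(np.float32)).reshape(-1, 2)


class Tracklet:
    """A trajectory fragment: a frame span plus a mean appearance feature."""

    def __init__(self, track_id, start, end, feature):
        self.track_id = track_id
        self.start, self.end = start, end
        self.feature = feature / np.linalg.norm(feature)  # unit-normalize


def greedy_merge(tracklets, sim_thresh=0.8):
    """Repeatedly merge the most similar pair of tracklets (cosine similarity
    of appearance features) whose frame spans do not overlap, until no
    remaining pair exceeds sim_thresh."""
    tracklets = list(tracklets)
    while True:
        best_pair, best_sim = None, sim_thresh
        for i in range(len(tracklets)):
            for j in range(i + 1, len(tracklets)):
                a, b = tracklets[i], tracklets[j]
                # Tracklets that coexist in a frame cannot be the same
                # player, so only disjoint frame spans are candidates.
                if a.end < b.start or b.end < a.start:
                    sim = float(a.feature @ b.feature)
                    if sim > best_sim:
                        best_pair, best_sim = (i, j), sim
        if best_pair is None:
            return tracklets
        i, j = best_pair
        a, b = tracklets[i], tracklets[j]
        fused = Tracklet(a.track_id, min(a.start, b.start),
                         max(a.end, b.end), a.feature + b.feature)
        tracklets = [t for k, t in enumerate(tracklets) if k not in (i, j)]
        tracklets.append(fused)
```

    The greedy criterion trades global optimality for simplicity: each step commits to the current best pair, which matches the abstract's description of re-associating fragmented trajectories without solving a global assignment problem.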
    To evaluate the performance of the proposed BPT method, Intersection over Union (IoU) is used to measure the accuracy of court calibration. Experimental results show that calibration accuracy reaches 87% for half-court calibration. For player tracking, Higher Order Tracking Accuracy (HOTA) is used to evaluate multi-object tracking performance; the overall tracking accuracy reaches 77%. Ultimately, by combining the best-performing tracking algorithm with the proposed BPT method and selecting tracking thresholds appropriate to the usage scenario, this study achieves a player tracking accuracy of up to 82%.
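    As a worked illustration of the calibration metric (a minimal sketch under assumptions: the warping direction, court-template mask, and function name are illustrative, not the thesis's code), IoU can be computed between the court template warped into the frame by the predicted homography and by the ground-truth homography:

```python
import numpy as np
import cv2


def calibration_iou(H_pred, H_gt, template_mask, frame_shape):
    """IoU between the court template warped into the image frame by the
    predicted and ground-truth homographies (both court -> image here).
    template_mask: binary top-view court mask; frame_shape: (height, width)."""
    h, w = frame_shape

    def warp(H):
        # Warp the binary template into the frame and re-binarize.
        return cv2.warpPerspective(
            template_mask.astype(np.uint8), H.astype(np.float32), (w, h)) > 0

    pred, gt = warp(H_pred), warp(H_gt)
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 0.0
```

    HOTA, by contrast, is a composite metric that balances detection and association accuracy across localization thresholds; it is typically computed with the TrackEval toolkit [31] rather than reimplemented.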

    Chapter 1  Introduction
        1-1  Research Background and Motivation
        1-2  Research Objectives
    Chapter 2  Literature Review
        2-1  Existing Athlete Tracking Systems
        2-2  Court Calibration and Player Tracking
        2-3  Camera Image Calibration
        2-4  Object Detection
        2-5  Multi-Object Tracking
    Chapter 3  Methodology
        3-1  Problem Description
        3-2  System Overview
        3-3  Court Calibration for Broadcast Video
            3-3-1  Court Region Segmentation
            3-3-2  Reference Point Regression
            3-3-3  Homography Fine-Tuning
        3-4  Player Tracking
            3-4-1  Preprocessing
            3-4-2  Merging Tracking Trajectories
            3-4-3  Classifying Players by Team
    Chapter 4  Experiments and Analysis
        4-1  Experimental Environment and Settings
            4-1-1  Environment Setup
            4-1-2  Datasets
        4-2  Model Evaluation
            4-2-1  Court Calibration Model Training Setup
            4-2-2  Player Tracking Evaluation
            4-2-3  Team Classification Results
    Chapter 5  Conclusion and Future Work
        5-1  Conclusion
        5-2  Future Work
    Chapter 6  References

    [1] “Semi-automated offside technology.” Accessed: May 15, 2024. [Online]. Available: https://inside.fifa.com/origin1904-p.cxm.fifa.com/technical/football-technology/football-technologies-and-innovations-at-the-fifa-world-cup-2022/semi-automated-offside-technology
    [2] N. Mehrasa, Y. Zhong, F. Tung, L. Bornn, and G. Mori, “Learning Person Trajectory Representations for Team Activity Analysis,” arXiv preprint arXiv:1706.00893, Jun. 2017, doi: 10.48550/arXiv.1706.00893.
    [3] N. Mehrasa, Y. Zhong, F. Tung, L. Bornn, and G. Mori, “Deep Learning of Player Trajectory Representations for Team Activity Analysis,” 2018.
    [4] T.-Y. Tsai, Y.-Y. Lin, H.-Y. M. Liao, and S.-K. Jeng, “Recognizing offensive tactics in broadcast basketball videos via key player detection,” in 2017 IEEE International Conference on Image Processing (ICIP), Sep. 2017, pp. 880–884. doi: 10.1109/ICIP.2017.8296407.
    [5] S. Hauri and S. Vucetic, “Group Activity Recognition in Basketball Tracking Data -- Neural Embeddings in Team Sports (NETS),” arXiv preprint arXiv:2209.00451, Aug. 2022, doi: 10.48550/arXiv.2209.00451.
    [6] 方麒堯, 陳韋翰, and 相子元, “運動軌跡追蹤系統之發展與回顧 [Development and review of motion trajectory tracking systems],” 中華體育季刊 (Quarterly of Chinese Physical Education), vol. 35, no. 2, pp. 125–136, Jun. 2021, doi: 10.6223/qcpe.202106_35(2).0006.
    [7] M.-C. Hu, M.-H. Chang, J.-L. Wu, and L. Chi, “Robust Camera Calibration and Player Tracking in Broadcast Basketball Video,” IEEE Trans. Multimed., vol. 13, no. 2, pp. 266–279, Apr. 2011, doi: 10.1109/TMM.2010.2100373.
    [8] P.-C. Wen, W.-C. Cheng, Y.-S. Wang, H.-K. Chu, N. C. Tang, and H.-Y. M. Liao, “Court Reconstruction for Camera Calibration in Broadcast Basketball Videos,” IEEE Trans. Vis. Comput. Graph., vol. 22, no. 5, pp. 1517–1526, May 2016, doi: 10.1109/TVCG.2015.2440236.
    [9] N. Zhang and E. Izquierdo, “A Fast and Effective Framework for Camera Calibration in Sport Videos,” in 2022 IEEE International Conference on Visual Communications and Image Processing (VCIP), Feb. 2022, pp. 1–5. doi: 10.1109/VCIP56404.2022.10008882.
    [10] L. Sha, J. Hobbs, P. Felsen, X. Wei, P. Lucey, and S. Ganguly, “End-to-End Camera Calibration for Broadcast Videos,” in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA: IEEE, Jun. 2020, pp. 13624–13633. doi: 10.1109/CVPR42600.2020.01364.
    [11] C.-Y. Wang, A. Bochkovskiy, and H.-Y. M. Liao, “YOLOv7: Trainable Bag-of-Freebies Sets New State-of-the-Art for Real-Time Object Detectors,” in 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada: IEEE, Jun. 2023, pp. 7464–7475. doi: 10.1109/CVPR52729.2023.00721.
    [12] A. Arbués-Sangüesa, C. Ballester, and G. Haro, “Single-Camera Basketball Tracker through Pose and Semantic Feature Fusion,” arXiv preprint arXiv:1906.02042, Jul. 2019, doi: 10.48550/arXiv.1906.02042.
    [13] T. Feng, K. Ji, A. Bian, C. Liu, and J. Zhang, “Identifying players in broadcast videos using graph convolutional network,” Pattern Recognit., vol. 124, p. 108503, Apr. 2022, doi: 10.1016/j.patcog.2021.108503.
    [14] A. Senocak, T.-H. Oh, J. Kim, and I. S. Kweon, “Part-Based Player Identification Using Deep Convolutional Representation and Multi-scale Pooling,” in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA: IEEE, Jun. 2018, pp. 1813–18137. doi: 10.1109/CVPRW.2018.00225.
    [15] A. Bewley, Z. Ge, L. Ott, F. Ramos, and B. Upcroft, “Simple Online and Realtime Tracking,” in 2016 IEEE International Conference on Image Processing (ICIP), Sep. 2016, pp. 3464–3468. doi: 10.1109/ICIP.2016.7533003.
    [16] N. Wojke, A. Bewley, and D. Paulus, “Simple online and realtime tracking with a deep association metric,” in 2017 IEEE International Conference on Image Processing (ICIP), Sep. 2017, pp. 3645–3649. doi: 10.1109/ICIP.2017.8296962.
    [17] Y. Du et al., “StrongSORT: Make DeepSORT Great Again,” IEEE Trans. Multimed., vol. 25, pp. 8725–8737, 2023, doi: 10.1109/TMM.2023.3240881.
    [18] N. Aharon, “BoT-SORT,” GitHub repository. Accessed: Jul. 27, 2024. [Online]. Available: https://github.com/NirAharon/BoT-SORT
    [19] V. Badrinarayanan, A. Kendall, and R. Cipolla, “SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 39, no. 12, pp. 2481–2495, Dec. 2017, doi: 10.1109/TPAMI.2016.2644615.
    [20] K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA: IEEE, Jun. 2016, pp. 770–778. doi: 10.1109/CVPR.2016.90.
    [21] M. Jaderberg, K. Simonyan, A. Zisserman, and K. Kavukcuoglu, “Spatial Transformer Networks,” in Advances in Neural Information Processing Systems, Curran Associates, Inc., 2015. Accessed: Jul. 27, 2024. [Online]. Available: https://proceedings.neurips.cc/paper_files/paper/2015/hash/33ceb07bf4eeb3da587e268d663aba1a-Abstract.html
    [22] “CrowdHuman Dataset.” Accessed: Jul. 27, 2024. [Online]. Available: https://www.crowdhuman.org/
    [23] Y. Cui, C. Zeng, X. Zhao, Y. Yang, G. Wu, and L. Wang, “SportsMOT: A Large Multi-Object Tracking Dataset in Multiple Sports Scenes,” in 2023 IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France: IEEE, Oct. 2023, pp. 9887–9897. doi: 10.1109/ICCV51070.2023.00910.
    [24] K. Zhou, Y. Yang, A. Cavallaro, and T. Xiang, “Omni-Scale Feature Learning for Person Re-Identification,” in 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea (South): IEEE, Oct. 2019, pp. 3701–3711. doi: 10.1109/ICCV.2019.00380.
    [25] Z. Zheng, X. Yang, Z. Yu, L. Zheng, Y. Yang, and J. Kautz, “Joint Discriminative and Generative Learning for Person Re-Identification,” in 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA: IEEE, Jun. 2019, pp. 2133–2142. doi: 10.1109/CVPR.2019.00224.
    [26] F. Schroff, D. Kalenichenko, and J. Philbin, “FaceNet: A Unified Embedding for Face Recognition and Clustering,” in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2015, pp. 815–823. doi: 10.1109/CVPR.2015.7298682.
    [27] A. Maglo, A. Orcesi, J. Denize, and Q. C. Pham, “Individual Locating of Soccer Players from a Single Moving View,” Sensors, vol. 23, no. 18, Art. no. 7938, Sep. 2023, doi: 10.3390/s23187938.
    [28] WongKinYiu/yolov7 (u7 branch), GitHub repository. Accessed: Jul. 01, 2024. [Online]. Available: https://github.com/WongKinYiu/yolov7/tree/u7
    [29] M. Koshkina, H. Pidaparthy, and J. H. Elder, “Contrastive Learning for Sports Video: Unsupervised Player Classification,” in 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Nashville, TN, USA: IEEE, Jun. 2021, pp. 4523–4531. doi: 10.1109/CVPRW53098.2021.00510.
    [30] J. Luiten et al., “HOTA: A Higher Order Metric for Evaluating Multi-Object Tracking,” Int. J. Comput. Vis., vol. 129, no. 2, pp. 548–578, Feb. 2021, doi: 10.1007/s11263-020-01375-2.
    [31] J. Luiten, “TrackEval,” GitHub repository. Accessed: Jun. 29, 2024. [Online]. Available: https://github.com/JonathonLuiten/TrackEval
