
Graduate Student: Chang, Po Hsiang (張博翔)
Thesis Title: Retinal layer segmentation technology based on deep learning and federated learning framework for optical coherence tomography (用於光學相干斷層掃描之基於深度學習和聯邦學習框架之視網膜層分割技術)
Advisor: Lu, Cheng-Kai (呂成凱)
Committee Members: Lu, Cheng-Kai (呂成凱); Lien, Chung-Yueh (連中岳); Lin, Cheng-Hung (林承鴻)
Oral Defense Date: 2024/07/15
Degree: Master
Department: Department of Electrical Engineering (電機工程學系)
Year of Publication: 2024
Academic Year of Graduation: 112
Language: Chinese
Number of Pages: 74
Chinese Keywords: 視網膜層、深度學習、聯邦學習、卷積神經網路
English Keywords: Retinal Layer, Deep Learning, Federated Learning, Convolutional Neural Network
Research Method: Experimental design
DOI URL: http://doi.org/10.6345/NTNU202401377
Thesis Type: Academic thesis
Access Counts: Views: 266; Downloads: 2
In this study, we propose FPENet(α), a lightweight model based on FPENet, designed for retinal layer segmentation of OCT images on edge devices. Retinal layer segmentation is a crucial tool for ophthalmic diagnosis, but on resource-constrained edge devices it faces a trade-off between computational cost and accuracy. FPENet(α) achieves high accuracy and efficiency when trained and tested on the HCMS, NR206, and OCT5K datasets; the model is optimized to balance accuracy and computational cost. FPENet(α) effectively captures features at multiple scales while significantly reducing computational cost, making it well suited for deployment on resource-limited edge devices such as the Raspberry Pi. Its lightweight design offers clear advantages in both computational-resource and memory requirements. For the federated learning part, we extend FedLion with L2 regularization and learning-rate decay, yielding FedLion(α), which effectively handles non-IID data. Experiments show that federated learning with FPENet(α) and FedLion(α), compared with using FPENet(α) alone, raises the average Dice coefficient by 0.7% on the HCMS dataset, 3.75% on the NR206 dataset, and 9.1% on the OCT5K dataset.
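The FedLion(α) recipe described above (FedLion extended with L2 regularization and learning-rate decay, aggregated across clients holding non-IID data) can be illustrated with a minimal NumPy sketch. This is not the thesis's implementation: the least-squares task, the FedAvg-style weight averaging, the client count, and the `l2_lambda` and decay values are all illustrative assumptions standing in for the actual model and optimizer.

```python
import numpy as np

def local_update(w, data, lr, l2_lambda=1e-4, steps=10):
    """One client's local training: gradient steps on a least-squares
    loss plus an L2 penalty on the weights (the regularization that
    FedLion(alpha) adds on top of FedLion)."""
    X, y = data
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y) + l2_lambda * w  # loss grad + L2 term
        w = w - lr * grad
    return w

def federated_round(w_global, clients, lr):
    """Server broadcasts the global weights, each client trains locally on
    its own (possibly non-IID) data, and the server averages the returned
    weights (FedAvg-style aggregation, used here only for illustration)."""
    local_ws = [local_update(w_global.copy(), c, lr) for c in clients]
    return np.mean(local_ws, axis=0)

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])

# Two clients with differently distributed inputs to mimic non-IID data.
clients = []
for shift in (0.0, 3.0):
    X = rng.normal(shift, 1.0, size=(50, 2))
    y = X @ w_true + rng.normal(0.0, 0.01, size=50)
    clients.append((X, y))

w = np.zeros(2)
lr = 0.05
for _ in range(40):
    w = federated_round(w, clients, lr)
    lr *= 0.95  # learning-rate decay across communication rounds

print(np.round(w, 2))  # ≈ w_true = [2, -1]
```

The L2 term limits how far each client's weights drift during local training, which is one standard way to mitigate client drift under non-IID data, while the per-round decay stabilizes the later communication rounds.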

Acknowledgements
Table of Contents
List of Tables
List of Figures
Chapter 1 Introduction
  1.1 Background and Research Motivation
  1.2 Research Objectives
  1.3 Thesis Organization
Chapter 2 Literature Review
  2.1 Optical Coherence Tomography
  2.2 Semantic Segmentation
    2.2.1 Semantic Segmentation Model Techniques
    2.2.2 Modern Applications of Semantic Segmentation Models to OCT
  2.3 Federated Learning (FL)
    1. Horizontal Federated Learning (HFL)
    2. Vertical Federated Learning (VFL)
    3. Federated Transfer Learning (FTL)
    4. Federated Reinforcement Learning (FRL)
    2.3.1 Non-IID Data in Federated Learning
    2.3.2 Federated Learning Architecture and Training Process
    2.3.3 Federated Learning Aggregation Algorithms
    2.3.4 Federated Learning Platforms
Chapter 3 Research Method and Design
  3.1 Research Process
  3.2 Datasets
    3.2.1 HCMS Dataset
    3.2.2 NR206 Dataset
    3.2.3 OCT5K Dataset
  3.3 FPENet Network Architecture
    3.3.1 Encoder Layers
    3.3.2 Decoder Layers
  3.4 Federated Learning Architecture
  3.5 Evaluation Metrics
    3.5.1 Parameters
    3.5.2 Floating-Point Operations (FLOPs)
    3.5.3 Operational Intensity (I)
Chapter 4 Experimental Results and Discussion
  4.1 Experiment Description
    4.1.1 Batch Size
    4.1.2 Learning Rate
    4.1.3 Number of Epochs
    4.1.4 Optimizer
    4.1.5 Evaluation Metrics and Loss Function
  4.2 Experimental Environment
  4.3 Results
    4.3.1 Operational Intensity Comparison
    4.3.2 HCMS Dataset
    4.3.3 NR206 Dataset
    4.3.4 OCT5K Dataset
  4.4 Ablation Study
Chapter 5 Conclusion and Future Work
  5.1 Conclusion
  5.2 Future Work
References
Autobiography
Academic Achievements
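The Dice coefficient listed among the evaluation metrics above, and used for the per-dataset comparisons in the abstract, follows Dice = 2|P∩T| / (|P|+|T|) per class. A minimal NumPy sketch for integer-label segmentation masks (the `smooth` term and the toy masks are illustrative assumptions, not the thesis's exact evaluation code):

```python
import numpy as np

def dice_coefficient(pred, target, num_classes, smooth=1e-6):
    """Mean Dice over classes: Dice_c = 2|P_c ∩ T_c| / (|P_c| + |T_c|).
    `pred` and `target` are integer label maps of the same shape; `smooth`
    avoids division by zero for classes absent from both masks."""
    scores = []
    for c in range(num_classes):
        p = (pred == c)
        t = (target == c)
        intersection = np.logical_and(p, t).sum()
        denom = p.sum() + t.sum()
        scores.append((2.0 * intersection + smooth) / (denom + smooth))
    return float(np.mean(scores))

pred = np.array([[0, 0, 1], [1, 2, 2]])
target = np.array([[0, 1, 1], [1, 2, 2]])
print(round(dice_coefficient(pred, target, num_classes=3), 3))  # → 0.822
```

Averaging per-class Dice scores, as sketched here, prevents large background regions from dominating the metric in multi-layer retinal masks.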


Electronic full text embargoed until 2029/08/13.