
Graduate Student: Hu, Jing-Hung (胡景閎)
Thesis Title: Deep Retinal Vessel Segmentation Network based on Dual Attention Mechanism (基於雙重注意力機制之視網膜血管分割深度學習網路)
Advisor: Kang, Li-Wei (康立威)
Oral Defense Committee: Kang, Li-Wei (康立威); Li, Hsiao-Chi (李曉祺); Lai, Ying-Hui (賴穎暉)
Oral Defense Date: 2024/07/22
Degree: Master
Department: Department of Electrical Engineering
Year of Publication: 2024
Academic Year of Graduation: 112
Language: Chinese
Number of Pages: 46
Chinese Keywords: retinal vessel segmentation, attention mechanism, encoder-decoder architecture, deep learning
English Keywords: retinal vessel segmentation, attention model, U-Net, encoder-decoder architecture, deep learning
DOI URL: http://doi.org/10.6345/NTNU202401362
Thesis Type: Academic thesis
    Vessel segmentation in fundus images can assist in observing ocular lesions so that conditions such as macular degeneration, diabetic retinopathy, and glaucoma can be detected and treated early. Because fundus images are acquired through a variety of procedures, their quality varies, and the accuracy of vessel segmentation directly affects the assessment of lesions. Although many image segmentation methods already exist, the vessel branching in fundus images with lesions is highly variable, and the accuracy of existing methods remains imperfect. The purpose of this study is to propose an improved vessel segmentation method for fundus images that segments vessels accurately across diverse retinal images, assisting physicians in diagnosing ocular diseases and making a modest contribution to eye care. Accurate vessel segmentation is a challenging task, mainly because of the low contrast of fundus images and the complexity of vascular morphology; conventional convolutions increase the number of multiplications while performing the convolution operation, causing a loss of information related to thin, low-contrast vessels. To address the low sensitivity and information loss of existing methods in vessel extraction, this study proposes a parallel-attention U-Net that combines two attention modules, EPA and DLA, to achieve accurate vessel segmentation: EPA focuses on spatial and channel feature extraction, while DLA focuses on multi-scale local features and edge-detection features; the features obtained from the parallel branches are then combined through deep and shallow feature fusion. Experiments were conducted on the DRIVE dataset to verify the model's performance, and the results indicate that the parallel U-Net model achieves competitive performance in retinal vessel segmentation.
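    The implementation is not reproduced in this record, so the exact EPA design is not shown here; the following is a minimal PyTorch-style sketch of the kind of combined channel-and-spatial attention the abstract attributes to the EPA module. The class name ChannelSpatialAttention and the reduction parameter are illustrative assumptions, not the thesis implementation.

```python
# Minimal sketch (not the thesis code): a combined channel + spatial attention
# block of the kind the abstract attributes to the EPA module. Names such as
# ChannelSpatialAttention and `reduction` are assumptions for illustration.
import torch
import torch.nn as nn


class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention: squeeze spatial dims, re-weight each channel.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: compress channels, produce one weight per pixel.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)   # emphasize informative channels
        x = x * self.spatial_gate(x)   # emphasize vessel-like spatial regions
        return x


if __name__ == "__main__":
    feats = torch.randn(1, 64, 48, 48)               # dummy feature map
    print(ChannelSpatialAttention(64)(feats).shape)  # torch.Size([1, 64, 48, 48])
```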

    Retinal vessel segmentation is a key step in the early diagnosis of fundus diseases such as macular degeneration, diabetic retinopathy, and glaucoma. However, the accuracy of vessel segmentation is affected by variations in image quality arising from different acquisition procedures. Deep learning-based retinal vessel segmentation has been shown to achieve better performance than traditional methods. However, most deep learning-based methods may still fail to capture global and local features from fundus images simultaneously, which can degrade segmentation performance and lead to unreliable diagnosis of fundus diseases. To solve this problem, this thesis introduces PADU-Net, a parallel attention-based dual U-Net architecture for retinal vessel segmentation. The key idea is to integrate two parallel U-Net modules, i.e., encoder-decoder architectures, equipped with local and global attention modules, respectively, for extracting local and global features. The features are then decoded and fused to generate the segmentation map for the input fundus image. Experiments conducted on the well-known DRIVE (Digital Retinal Images for Vessel Extraction) dataset have verified the performance of the proposed framework, which outperforms state-of-the-art methods.
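    Since only the abstract is available in this record, the following is a hedged sketch of how two parallel encoder-decoder branches carrying different attention modules might be fused into a single segmentation map, as the abstract describes for PADU-Net. The names PADUNetSketch, BranchSketch, and fusion_head are hypothetical and stand in for the published architecture rather than reproducing it.

```python
# Rough sketch (assumptions, not the published PADU-Net): two parallel
# encoder-decoder branches, one intended for global context and one for local
# detail, whose decoded outputs are fused into a single vessel probability map.
import torch
import torch.nn as nn


def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class BranchSketch(nn.Module):
    """Stand-in for one U-Net-style branch (encoder-decoder with attention)."""

    def __init__(self, in_ch: int = 3, feat_ch: int = 32):
        super().__init__()
        self.encoder = conv_block(in_ch, feat_ch)
        self.decoder = conv_block(feat_ch, feat_ch)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))


class PADUNetSketch(nn.Module):
    def __init__(self, in_ch: int = 3, feat_ch: int = 32):
        super().__init__()
        self.global_branch = BranchSketch(in_ch, feat_ch)  # global-attention path
        self.local_branch = BranchSketch(in_ch, feat_ch)   # local-attention path
        # Fuse the two decoded feature maps into one vessel probability map.
        self.fusion_head = nn.Sequential(
            nn.Conv2d(2 * feat_ch, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.global_branch(x), self.local_branch(x)], dim=1)
        return self.fusion_head(fused)


if __name__ == "__main__":
    fundus = torch.randn(1, 3, 64, 64)    # dummy RGB fundus patch
    print(PADUNetSketch()(fundus).shape)  # torch.Size([1, 1, 64, 64])
```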

    Acknowledgments i
    Chinese Abstract ii
    Abstract iii
    Table of Contents iv
    List of Tables vi
    List of Figures vii
    Chapter 1: Introduction 1
      1.1 Research Motivation and Background 1
      1.2 Research Objectives and Overview of Methods 2
      1.3 Thesis Organization 3
    Chapter 2: Related Work 4
      2.1 Convolutional Neural Network Architectures for Medical Imaging 5
      2.2 Attention-Based Methods for Medical Image Segmentation 14
    Chapter 3: Research Content and Methods 23
      3.1 PADU-Net Architecture 23
      3.2 Efficient Global Attention Module (EPA) 25
      3.3 Parallel Local Attention Module (DLA) and Deep-Shallow Feature Fusion 27
      3.4 Loss Function 29
    Chapter 4: Experimental Results and Analysis 31
      4.1 Experimental Configuration 31
        4.1.1 Hardware Configuration 31
        4.1.2 Software Configuration 31
        4.1.3 Training Settings 32
        4.1.4 Training Datasets 33
      4.2 Analysis of Experimental Results 34
        4.2.1 Evaluation Metrics 34
        4.2.2 Comparison with Existing Methods 36
      4.3 Ablation Study Results 38
    Chapter 5: Conclusion 40
    References 41
    Autobiography 45
    Publication List 46

