
Author: 后玲 (Hou, Ling)
Thesis title: 視覺式耳穴診斷輔助系統 (A Vision-Based Auricular Diagnosis Assistance System)
Advisor: 方瓊瑤 (Fang, Chiung-Yao)
Degree: Master
Department: Department of Computer Science and Information Engineering
Year of publication: 2020
Academic year of graduation: 108 (ROC calendar, 2019-2020)
Language: Chinese
Number of pages: 88
Chinese keywords: 耳醫學說、耳穴診斷、耳穴位置、語義分割神經網路、深度學習、疾病辨識、視診陽性反應
English keywords: Otology theory, Auricular point diagnosis, Auricular point location, Semantic segmentation neural network, Deep learning, Disease identification, Positive diagnosis
DOI URL: http://doi.org/10.6345/NTNU202001037
Document type: Academic thesis
    Because of busy work schedules, modern people pay little attention to the importance of regular full-body health examinations. The early symptoms of some diseases are not obvious, and by the time the symptoms become obvious it is often too late. If a medical diagnosis assistance system allowed ordinary people to perform a preliminary health check anytime and anywhere, the regret of discovering diseases too late could be reduced. Such a system could also provide physicians with suggestions on which patients need follow-up examinations, reducing the waste of medical resources. This study therefore develops such a system, a vision-based auricular diagnosis assistance system, to achieve these goals.
    The vision-based auricular diagnosis assistance system identifies diseases from ear images. The system consists of two parts: the first detects positive-reaction regions visible on inspection, and the second identifies the related diseases. When an ear image is input, a semantic segmentation neural network first locates the positive-reaction regions in the image. The network used in this study is an improved version of the U-Net architecture, modified with batch normalization, atrous (dilated) convolution, a reduced number of convolutional layers, and the integration of multiple kernel dilation rates. The segmentation result is then fed to the disease identification system, which determines whether the input image shows any disease known to the system.
    Nine diseases are identified in this study: hepatitis, mastitis, cervicitis, prostatitis, frontal headache, migraine, occipital headache, vertex headache, and generalized (whole-head) headache. The dataset was photographed and collected by the author and named the CVIU 108 EAR Dataset. Experiments show that after training on the CVIU 108 EAR Dataset the system achieves a disease identification accuracy of 97.22% and an IoU of 84.71%, demonstrating the effectiveness of the proposed vision-based diagnosis assistance system.

    Because of busy work schedules, modern people pay little attention to the importance of regular health examinations. This study develops a vision-based auricular diagnosis assistance system that allows people to perform a basic health examination anytime and anywhere. At the same time, the medical diagnosis assistance system can assist physicians in determining what kind of follow-up examinations patients need, saving medical resources.
    The proposed vision-based auricular diagnosis assistance system performs visual auricular diagnosis in two stages. First, ear images are input to a semantic segmentation neural network that detects positive diagnosis areas. The network used in this study is an improved version of the U-Net architecture, incorporating batch normalization, atrous convolution, a reduced number of convolutional layers, and the integration of multiple dilation rates. Second, the detected positive diagnosis areas are used to identify diseases.
    Nine types of diseases are identified in this study: hepatitis, mastitis, cervicitis, prostatitis, frontal headache, migraine, occipital headache, vertex headache, and generalized headache. The dataset used in this study was collected by the author and named the CVIU 108 EAR Dataset. Experimental results show that after training on the CVIU 108 EAR Dataset the system achieves a disease recognition accuracy of 97.22% and an IoU of 84.71%.
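    One of the improvements named above is atrous (dilated) convolution. As a hedged illustration only, and not the thesis's actual 2-D implementation, the following minimal NumPy sketch shows the core idea in 1-D: spacing the kernel taps by a dilation rate enlarges the receptive field without adding any weights. The function name `atrous_conv1d` and its parameters are invented for this sketch.

```python
import numpy as np

def atrous_conv1d(signal, kernel, rate):
    """1-D atrous (dilated) convolution with 'valid' padding.

    Sampling the input every `rate` positions spreads the kernel
    over a span of (k - 1) * rate + 1 samples, so the receptive
    field grows with the rate while the parameter count stays fixed.
    """
    k = len(kernel)
    span = (k - 1) * rate + 1            # effective receptive field
    out_len = len(signal) - span + 1
    out = np.zeros(out_len)
    for i in range(out_len):
        for j in range(k):
            out[i] += signal[i + j * rate] * kernel[j]
    return out

x = np.arange(8, dtype=float)            # [0, 1, ..., 7]
k = np.array([1.0, 1.0, 1.0])            # a simple summing kernel

print(atrous_conv1d(x, k, rate=1))       # receptive field 3 -> [3. 6. 9. 12. 15. 18.]
print(atrous_conv1d(x, k, rate=2))       # receptive field 5 -> [6. 9. 12. 15.]
```

    With the same three weights, rate 2 covers five input samples instead of three, which is why the improved U-Net can integrate several dilation rates to capture context at multiple scales.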
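    The segmentation quality above is reported as IoU (Intersection over Union). As a minimal sketch of how this metric is computed for a single pair of binary masks (the thesis's exact averaging over images or classes may differ), the helper `iou` and the toy masks below are invented for illustration:

```python
import numpy as np

def iou(pred, gt):
    """Intersection over Union between two binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect overlap
    return np.logical_and(pred, gt).sum() / union

pred = np.array([[1, 1, 0],
                 [1, 0, 0],
                 [0, 0, 0]])
gt   = np.array([[1, 1, 0],
                 [0, 1, 0],
                 [0, 0, 0]])
print(iou(pred, gt))  # intersection 2, union 4 -> 0.5
```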

    Acknowledgements I
    Abstract (Chinese) II
    Abstract (English) III
    Table of Contents IV
    List of Tables VI
    List of Figures VII
    Chapter 1 Introduction 1
        Section 1 Otology Theory 1
        Section 2 Research Motivation and Objectives 3
        Section 3 Types of Positive Reactions on Inspection 6
        Section 4 Research Limitations 10
        Section 5 Research Contributions 11
        Section 6 Thesis Organization 12
    Chapter 2 Literature Review 13
        Section 1 Introduction to the Ear and Auricular Points 13
        Section 2 Ear Detection 15
        Section 3 Semantic Segmentation Neural Networks 18
        Section 4 U-Net and Related Improvements 30
    Chapter 3 Vision-Based Auricular Diagnosis Assistance System 37
        Section 1 System Flow 37
        Section 2 Positive Diagnosis Area Detection 39
        Section 3 System Improvements 45
    Chapter 4 Experimental Results and Discussion 51
        Section 1 Research Equipment 51
        Section 2 Dataset Construction 52
        Section 3 Analysis of System Architecture Improvements 57
        Section 4 Analysis of Dilation Rate Selection 68
        Section 5 Analysis of Multi-Dilation-Rate Integration 71
        Section 6 Disease Recognition Accuracy 74
        Section 7 Discussion of Disease Recognition Accuracy and Training Time 76
    Chapter 5 Conclusions and Future Work 78
        Section 1 Conclusions 78
        Section 2 Future Work 79
    References 80
    Appendix 1 83
    Appendix 2 87
    Appendix 3 88

