Graduate student: 林承憲 Lin, Cheng-Shian
Thesis title: 資料增強神經網路之高效能異常圖像偵測系統 (An Efficient Data Augmentation Network for Out-of-Distribution Image Detection)
Advisor: 林政宏 Lin, Cheng-Hung
Degree: Master
Department: Department of Electrical Engineering
Year of publication: 2021
Academic year: 109
Language: Chinese
Pages: 47
Keywords (Chinese): 深度神經網路、資料增強、分布外偵測、異常偵測、離群偵測
Keywords (English): deep neural network, data augmentation, out-of-distribution detection, anomaly detection, outlier detection
DOI: http://doi.org/10.6345/NTNU202100209
Document type: Academic thesis
Neural networks may classify out-of-distribution anomalous data into in-distribution classes with high confidence, which can pose severe risks in safety-critical applications such as autonomous driving and medical diagnosis. Anomaly detection has therefore become a crucial topic in the development of neural networks: a good neural network should not only accurately classify normal samples whose distribution resembles the training data, but also detect anomalous samples that deviate significantly from it.
This thesis proposes an efficient out-of-distribution image detection system based on a data augmentation network. A set of rotation transformations is first applied to augment each sample; the augmented samples are fed into the neural network, and an aggregation function combines their predicted probabilities into a confidence score, which then determines whether the input sample is out-of-distribution. Unlike some methods that rely on out-of-distribution images, this work trains only on in-distribution images and integrates easily with various neural networks, making the proposed method more feasible in practical applications.
Experimental results show that the system outperforms the latest out-of-distribution detection algorithms on multiple vision datasets, and that when the neural network is pre-trained, the system achieves further performance gains on large datasets.
Deep neural networks may classify out-of-distribution image data into in-distribution classes with high confidence scores, which can cause serious or even fatal hazards in applications such as autonomous vehicles and medical diagnosis. Out-of-distribution detection (also called anomaly detection or outlier detection) has therefore become a critical issue for the successful development of neural networks for image classification: a successful network must be able to distinguish anomalous data that differs significantly from the data used in training. In this paper, we propose an efficient data augmentation network that detects out-of-distribution image data by applying a set of common geometric operations to training and testing images. The predicted probabilities of the augmented data are combined by an aggregation function into a confidence score that distinguishes in-distribution from out-of-distribution image data. Unlike approaches that use out-of-distribution image data to train networks, the proposed data augmentation network uses only in-distribution image data. This advantage makes our approach more practical than other approaches, and it can be easily applied to various neural networks to improve safety in real applications. The experimental results show that the proposed data augmentation network outperforms the state-of-the-art approaches on various datasets. In addition, pre-training techniques can be integrated into the data augmentation network to yield substantial improvements on large and complex datasets.
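The detection pipeline described in the abstract — augment the input with a set of geometric transformations, collect the classifier's predicted probabilities for each augmented copy, aggregate them into a confidence score, and threshold that score — can be sketched as follows. This is a minimal illustration, not the thesis's exact configuration: the four-rotation augmentation set, the mean aggregation, the threshold value, and the `toy_predict` stand-in classifier are all assumptions made for the sketch.

```python
import numpy as np

def rotations(image):
    """Return the four 90-degree rotations of an H x W image array
    (an assumed augmentation set for illustration)."""
    return [np.rot90(image, k) for k in range(4)]

def confidence_score(image, predict_proba, aggregate=np.mean):
    """Feed each rotated copy to the classifier and aggregate the
    predicted probability vectors into one confidence score."""
    probs = np.stack([predict_proba(x) for x in rotations(image)])  # (4, C)
    combined = aggregate(probs, axis=0)   # e.g. mean over the 4 copies
    return float(combined.max())          # confidence = top combined prob

def is_out_of_distribution(image, predict_proba, threshold=0.55):
    """Flag the input as OOD when the aggregated confidence falls
    below a threshold (the value here is an illustrative assumption)."""
    return confidence_score(image, predict_proba) < threshold

# Toy stand-in classifier: confident only on the upright orientation,
# mimicking a network trained on in-distribution images.
def toy_predict(x):
    return np.array([0.9, 0.1]) if x[0, 0] == 1.0 else np.array([0.5, 0.5])
```

In practice `predict_proba` would be a trained network's softmax output, and the threshold would be calibrated on held-out in-distribution data.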