| Field | Value |
|---|---|
| Author | 洪銘鴻 (Hong, Ming-Hong) |
| Title | 基於邊緣計算和深度學習之病媒蚊分類系統 (A Vector Mosquitoes Classification System Based on Edge Computing and Deep Learning) |
| Advisor | 陳伶志 (Chen, Ling-Jyh) |
| Degree | Master |
| Department | Department of Computer Science and Information Engineering |
| Year of Publication | 2019 |
| Academic Year | 107 |
| Language | Chinese |
| Pages | 50 |
| Keywords | dengue fever, deep learning, edge computing, convolutional neural network, image processing, computer vision |
| DOI URL | http://doi.org/10.6345/NTNU202000442 |
| Thesis Type | Academic thesis |
Dengue fever and Japanese encephalitis are infectious diseases caused by viruses and transmitted to humans by mosquitoes. In the most recent outbreak of dengue fever, in Tainan City in 2015, cases first appeared only in the northern part of the city, then spread throughout Tainan at an alarming rate, and eventually across the whole of Taiwan. In that year, confirmed cases exceeded 40,000 and deaths reached 218, while the number of asymptomatic infections was estimated at roughly nine to ten times the number of symptomatic cases. If a patient is bitten again by a vector mosquito and suffers a cross-infection, the mortality rate of severe cases rises sharply to more than 20%. There is currently no preventive vaccine for dengue fever and no specific drug to treat it; its vector mosquitoes are Aedes aegypti and Aedes albopictus. Japanese encephalitis has a fatality rate of roughly 20% or more, about 40% of survivors suffer neurological or psychiatric sequelae, and likewise there is no specific drug to treat it; its vector mosquitoes are Culex tritaeniorhynchus and Culex annulus. Avoiding vector mosquito bites is currently the only way to prevent dengue fever and Japanese encephalitis.
To address the problems of dengue fever and Japanese encephalitis, this thesis proposes a vector mosquito classification system: an intelligent mosquito-trapping system with image classification accuracy of up to 98% and a counting function, combining edge computing, deep-learning image processing, and computer vision. The edge-computing component performs object detection, while the deep-learning component classifies and counts the mosquitoes; together these steps remedy the inability of existing mosquito traps and mosquito-killer lamps to classify mosquito species. Image data are collected with the smart trapping device. The main species collected and processed are the two vector mosquitoes common in Taiwan that transmit dengue fever, Aedes albopictus and Aedes aegypti, and the two common in Taiwan that transmit Japanese encephalitis, Culex tritaeniorhynchus and Culex annulus; classification is performed as a binary task between Aedes and Culex. The system and device yield richer information on mosquitoes in Taiwan, including the number, species, time, and location of mosquitoes entering the trap, which can serve as an important reference for subsequent vector-control measures.
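The deep-learning half of the pipeline is described above as a binary Aedes-versus-Culex image classifier. The abstract does not reproduce the network itself, so the following is only a minimal sketch, assuming a small Keras CNN and a hypothetical directory of labelled trap images; it illustrates the kind of binary classifier the system relies on, not the exact architecture that reached the reported 98% accuracy.

```python
# Minimal sketch of a binary Aedes-vs-Culex image classifier.
# The architecture, input size, and directory layout are illustrative
# assumptions, not the network reported in the thesis.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (128, 128)   # assumed input resolution
BATCH = 32

def build_classifier():
    """Small CNN that outputs P(Aedes) for a cropped mosquito image."""
    model = models.Sequential([
        layers.Input(shape=IMG_SIZE + (3,)),
        layers.Rescaling(1.0 / 255),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # binary: Aedes vs. Culex
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Hypothetical dataset layout: data/train/{aedes,culex}/*.jpg
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "data/train", image_size=IMG_SIZE, batch_size=BATCH, label_mode="binary")
    val_ds = tf.keras.utils.image_dataset_from_directory(
        "data/val", image_size=IMG_SIZE, batch_size=BATCH, label_mode="binary")
    model = build_classifier()
    model.fit(train_ds, validation_data=val_ds, epochs=10)
```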
Dengue fever and Japanese encephalitis are mosquito-borne infectious diseases caused by viruses. Dengue is particularly dangerous for children and can lead to death, although fewer than 1 percent of cases are fatal with proper medical care, according to the World Health Organization (WHO). Dengue fever symptoms, which may include a high fever, headache, joint and muscle pain, and a skin rash, typically begin three days to two weeks after infection. In the most recent outbreak of dengue fever, in Tainan City, Taiwan in 2015, cases first appeared only in the northern part of the city, then spread across all of Tainan at an alarming rate, and eventually throughout the whole of Taiwan. In that year, the number of confirmed cases exceeded 40,000 and deaths reached 218. There is no vaccine for prevention and no specific drug for treatment. The mosquitoes that transmit dengue fever are Aedes aegypti and Aedes albopictus, and the mosquitoes that transmit Japanese encephalitis are Culex tritaeniorhynchus and Culex annulus. Most importantly, avoiding vector mosquito bites is the only way to prevent dengue fever and Japanese encephalitis. To alleviate the problem of dengue fever and Japanese encephalitis, this paper proposes a vector mosquito classification system: an intelligent mosquito-catching system with image classification accuracy of up to 98%. The system combines edge computing, deep-learning image processing, and computer vision to address the lack of species classification in existing mosquito traps and mosquito-killer lamps. The main data collected and processed cover the two species common in Taiwan that transmit dengue fever, Aedes albopictus and Aedes aegypti, and the two species common in Taiwan that transmit Japanese encephalitis, Culex tritaeniorhynchus and Culex annulus. In this paper, Aedes and Culex are used as the two classes for binary classification. The system and device obtain richer information on mosquitoes in Taiwan, including the number, species, time, and place of vector mosquitoes entering the trap, which can serve as an important reference for taking measures against vector mosquitoes.
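The edge-computing half of the pipeline is described as object detection and counting on the trap device itself. As a rough illustration only, the sketch below uses OpenCV background subtraction to flag and count moving objects in a trap-camera recording; the detection method, thresholds, and file names here are assumptions rather than the on-device pipeline from the thesis, and a tracker would be needed in practice to avoid counting the same mosquito across multiple frames.

```python
# Sketch of the edge-side step: detect moving objects in the trap camera feed
# and count them, so that only cropped candidates are passed to the classifier.
# Background subtraction with OpenCV is an illustrative assumption, not
# necessarily the exact on-device detection method used in the thesis.
import cv2

MIN_AREA = 50  # assumed minimum contour area (pixels) to count as a mosquito

def count_detections(video_path: str) -> int:
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
    count = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        # Drop shadow pixels (value 127) and binarize the foreground mask.
        _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
        # Morphological opening removes small speckle noise.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        # Note: this counts per-frame detections; tracking would be required
        # to count unique mosquitoes entering the trap.
        count += sum(1 for c in contours if cv2.contourArea(c) > MIN_AREA)
    cap.release()
    return count

if __name__ == "__main__":
    # Hypothetical recording captured by the trap camera.
    print("detections:", count_detections("trap_recording.mp4"))
```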