
Author: 陳奕寧 (Chen, Yi-Ning)
Thesis Title: 分析物件、場景、美學推薦照片濾鏡 (Photo Filter Recommendation by Analyzing Objects, Scenes and Aesthetics)
Advisor: 葉梅珍 (Yeh, Mei-Chen)
Degree: Master
Department: Department of Computer Science and Information Engineering (資訊工程學系)
Publication Year: 2018
Graduation Academic Year: 106 (ROC calendar)
Language: Chinese
Pages: 32
Keywords: Photo Filter, Photo Content, Convolutional Neural Network
DOI URL: http://doi.org/10.6345/THE.NTNU.DCSIE.040.2018.B02
Thesis Type: Academic thesis
Abstract: This thesis aims to help social media users save time when selecting photo filters. With the growing number of available filters and the limited display size of a mobile phone, quickly choosing a suitable filter has become an important problem. We observed on social media that photos containing specific objects and scenes tend to favor certain filters, so we propose to recommend filters by analyzing photo content. We collected 68,400 filtered photos from Instagram as training data and, using deep learning techniques, analyzed the objects, scenes, and aesthetics-related attributes of each photo to build a neural network model for filter recommendation. The model achieves 51.87% Top-1 accuracy on the FACD filter recommendation dataset, and the filtered-photo dataset built from Instagram can support future related research.
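    The abstract above describes fusing object, scene, and aesthetics cues in a single network that scores candidate filters. The PyTorch snippet below is a minimal sketch of that idea under assumed inputs, not the thesis's actual architecture: it concatenates three per-photo descriptors (an object-occurrence vector, scene probabilities, and an aesthetic score distribution), feeds them to a small fully connected classifier, and trains with a standard cross-entropy objective; the Top-1 recommendation is simply the highest-scoring filter. The feature dimensions, layer sizes, and number of candidate filters are placeholder assumptions.

    import torch
    import torch.nn as nn

    # Illustrative sketch only: the descriptor dimensions, layer sizes and
    # filter count below are assumptions, not values taken from the thesis.
    NUM_FILTERS = 20   # assumed number of candidate filters
    OBJ_DIM = 80       # e.g. a COCO-style object-occurrence vector (assumed)
    SCENE_DIM = 365    # e.g. Places365-style scene probabilities (assumed)
    AESTH_DIM = 10     # e.g. an AVA-style aesthetic score distribution (assumed)

    class FilterRecommender(nn.Module):
        """Fuses per-photo content descriptors and scores candidate filters."""
        def __init__(self):
            super().__init__()
            self.fusion = nn.Sequential(
                nn.Linear(OBJ_DIM + SCENE_DIM + AESTH_DIM, 256),
                nn.ReLU(),
                nn.Dropout(0.5),
                nn.Linear(256, NUM_FILTERS),  # logits over candidate filters
            )

        def forward(self, obj_feat, scene_feat, aesth_feat):
            # Concatenate the three content descriptors and score all filters.
            x = torch.cat([obj_feat, scene_feat, aesth_feat], dim=1)
            return self.fusion(x)

    if __name__ == "__main__":
        model = FilterRecommender()
        criterion = nn.CrossEntropyLoss()  # multi-class filter classification

        # Dummy batch of 4 photos with random descriptors and filter labels.
        obj = torch.rand(4, OBJ_DIM)
        scene = torch.rand(4, SCENE_DIM)
        aesth = torch.rand(4, AESTH_DIM)
        labels = torch.randint(0, NUM_FILTERS, (4,))

        logits = model(obj, scene, aesth)
        loss = criterion(logits, labels)
        top1 = logits.argmax(dim=1)  # Top-1 filter recommendation per photo
        print(loss.item(), top1.tolist())

    In the thesis pipeline the three descriptors would come from pretrained object, scene, and aesthetics networks; here they are random tensors purely to keep the sketch self-contained and runnable.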

    Table of Contents:
    List of Tables
    List of Figures
    Chapter 1  Introduction
        1.1  Research Background and Motivation
        1.2  Research Objectives
    Chapter 2  Literature Review
    Chapter 3  Method and Dataset
        3.1  Dataset
        3.2  Photo Content
            3.2.1  Objects
            3.2.2  Scenes
            3.2.3  Aesthetic Attributes
        3.3  Filter Recommendation Network Architecture
    Chapter 4  Experiments
        4.1  Setup
            4.1.1  Training
            4.1.2  Testing
        4.2  Results
    Chapter 5  Conclusion and Future Work
    References

