| Graduate Student | 李柏逸 |
|---|---|
| Thesis Title | 劇院照片建置自動化之研究 (A Study on Automatic Cinemagraph) |
| Advisor | 葉梅珍 |
| Degree | Master |
| Department | Department of Computer Science and Information Engineering (資訊工程學系) |
| Publication Year | 2012 |
| Graduation Academic Year | 100 (ROC calendar) |
| Language | Chinese |
| Pages | 36 |
| Chinese Keywords | 劇院照片, 動態分析 |
| English Keywords | Cinemagraph, Motion analysis |
| Thesis Type | Academic thesis |
The cinemagraph is a new type of photograph that emerged in 2011: a still image in which certain regions move, and the motion within those regions is continuous, natural, and endlessly repeatable. Creating a cinemagraph by hand is time-consuming and labor-intensive; typically the user must shoot a video clip and then use image-editing software to mask, frame by frame, the dynamic regions to be kept, before compositing the result. Moreover, choosing the dynamic regions that make the resulting cinemagraph more interesting and appealing is itself a problem. Among existing studies on automatic cinemagraph creation, most methods first detect all dynamic regions in the video and then let the user decide which ones to keep. The method proposed in this thesis focuses on region selection: it filters the dynamic regions computationally and automatically chooses the region most likely to attract viewers' attention and interest. We present a fully automatic cinemagraph creation method that makes producing cinemagraphs simple and convenient: the user only needs to shoot a video to obtain a cinemagraph. Experimental results show that the dynamic mask regions selected by our method largely agree with the judgment of ordinary users.
The cinemagraph, introduced in 2011, is a new type of media in which one or a few dynamic regions of a still image are presented in a continuous, seamless, and looping manner. Manually creating cinemagraphs is tedious: a user must carefully select and edit frames and regions to produce an interesting result. Moreover, selecting good dynamic regions is itself challenging for end users. A few cinemagraph rendering tools exist, but most are semi-automatic, requiring the user to label the dynamic regions during the process. In this thesis, we present a fully automatic approach; in particular, we emphasize a computational method for determining a region of the video that is highly likely to contain interesting moving patterns and to attract users' attention. The method has been tested on several videos, and the experiments show good results.
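To make the pipeline described above concrete, here is a minimal Python/OpenCV sketch of automatic cinemagraph generation. It is an illustration only, not the thesis's implementation: where the thesis selects the region by an attention/interest criterion, this sketch substitutes a crude motion-energy heuristic (frame differencing plus connected components), and the input file name `input.mp4` is hypothetical. It assumes a steady (pre-stabilized) clip that actually contains motion.

```python
# Minimal sketch of an automatic cinemagraph pipeline (illustrative only):
# find the most active region by frame differencing, freeze everything else,
# and replay the chosen region as a loop.
import cv2
import numpy as np

def read_frames(path):
    """Load all frames of a (pre-stabilized) video as BGR images."""
    cap = cv2.VideoCapture(path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()
    return frames

def motion_energy(frames, thresh=25):
    """Accumulate per-pixel motion over the clip via frame differencing."""
    acc = np.zeros(frames[0].shape[:2], np.float32)
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for f in frames[1:]:
        gray = cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)
        acc += (cv2.absdiff(gray, prev) > thresh).astype(np.float32)
        prev = gray
    return acc

def select_region(acc):
    """Keep the connected component with the most accumulated motion
    (a crude stand-in for the thesis's attention-based selection).
    Assumes the clip contains motion (acc is not all zeros)."""
    binary = (acc > 0.1 * acc.max()).astype(np.uint8)
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, np.ones((9, 9), np.uint8))
    n, labels, _, _ = cv2.connectedComponentsWithStats(binary)
    best = 1 + np.argmax([acc[labels == i].sum() for i in range(1, n)])
    return (labels == best).astype(np.uint8)

def render(frames, mask, out_path="cinemagraph.avi", fps=24):
    """Composite: static first frame everywhere except the animated mask."""
    h, w = mask.shape
    mask3 = cv2.merge([mask] * 3).astype(bool)
    still = frames[0]
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h))
    for f in frames:
        writer.write(np.where(mask3, f, still))
    writer.release()

frames = read_frames("input.mp4")       # hypothetical input clip
render(frames, select_region(motion_energy(frames)))
```

Note one design gap relative to a real cinemagraph tool: this sketch simply replays the clip once, so the loop point may show a visible seam; a production method would search for frames where the masked region nearly repeats, or crossfade across the loop boundary, to achieve the seamless looping the abstract describes.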