| Graduate Student | 王逸禮 Wang, Yi-Li |
|---|---|
| Thesis Title | 板書教學內容擷取之研究 Automatic Lecture Note Extraction from Blackboard based Instruction Video |
| Advisor | 李忠謀 Lee, Chung-Mou |
| Degree | 碩士 Master |
| Department | 資訊工程學系 Department of Computer Science and Information Engineering |
| Publication Year | 2016 |
| Graduation Academic Year | 104 |
| Language | Chinese |
| Pages | 72 |
| Chinese Keywords | 教學影片、教學筆記、內容分析、彩色教學筆記 |
| English Keywords | instruction videos, lecture notes, content analysis, colorful notes |
| DOI URL | https://doi.org/10.6345/NTNU202203960 |
| Thesis Type | Academic thesis (學術論文) |
In traditional classroom instruction, the teacher writes supplementary material on the blackboard and students copy it into their notes. However, as the teacher moves around while lecturing, parts of the board are often blocked from view, so students cannot concentrate on the lecture, and their notes end up sloppy or incomplete, making it difficult to review the lesson afterwards.
This study focuses on extracting the complete handwriting on the blackboard, so that the resulting lecture notes are clear and faithful to what the teacher actually wrote in class. It also locates suitable segmentation points in the lecture video and keeps the meaningful frames as lecture notes, which helps students browse the video on their own. To this end, an intelligent lecture-note extraction system is designed that can extract the important notes from each video across different teaching environments.
The main approach first uses K-means segmentation to locate the blackboard region and discard everything outside it, so that the writing on the board can be preserved without gaps even when the teacher's body occludes part of it. Adaptive thresholding is then applied to extract the handwriting and filter out noise, and the extracted strokes are mapped back to their colors in the original frame so that the chalk writing can be rendered in color. Suitable segmentation points in the lecture video are detected by measuring how much the writing on the board changes over time, and finally the frame with the most complete writing in each segment is kept as a lecture note.
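The abstract does not spell out how the K-means step is implemented; the following is a minimal sketch of locating the blackboard region by clustering pixel colors, assuming OpenCV and NumPy. The function name `find_blackboard_region`, the cluster count `k=3`, and the heuristic of taking the most frequent color cluster as the board are illustrative assumptions, not the thesis's exact settings.

```python
import cv2
import numpy as np

def find_blackboard_region(frame_bgr, k=3):
    """Return a bounding box (x, y, w, h) that roughly covers the blackboard."""
    # Downsample for faster clustering; the 0.25 factor is an arbitrary choice.
    small = cv2.resize(frame_bgr, (0, 0), fx=0.25, fy=0.25)
    samples = small.reshape(-1, 3).astype(np.float32)

    # Cluster pixel colors into k groups with K-means.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, _ = cv2.kmeans(samples, k, None, criteria, 5, cv2.KMEANS_PP_CENTERS)

    # Assume the blackboard is the most frequent color cluster in the frame.
    counts = np.bincount(labels.ravel(), minlength=k)
    mask = (labels.reshape(small.shape[:2]) == np.argmax(counts)).astype(np.uint8) * 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((9, 9), np.uint8))

    # Keep the largest connected region; [-2] works for OpenCV 3.x and 4.x.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return x * 4, y * 4, w * 4, h * 4   # scale the box back to the original frame
```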
Three sets of experiments were conducted on ten lecture videos with different teaching environments, teachers, and course content, and the results were compared against techniques from other studies to verify the usability and practicality of the proposed system. The experiments show that, across different teaching environments, the system performs well in foreground/background separation, lecture-note extraction, and colored lecture-note generation.
Lecturing in a blackboard-equipped classroom is still very common in K-12 education. The teacher usually writes important notes on the blackboard, and students often try to copy all of them. However, as the teacher typically stands between the students and the blackboard, parts of the board are blocked from view. As a result, students' note-taking cannot stay in sync with the lecture. This study addresses this note-taking difficulty by automatically extracting the notes on the blackboard throughout a lecture.
To extract the notes from the blackboard, K-means clustering segmentation is first used to find the blackboard area. Adaptive thresholding is then applied to extract the written notes from the board, and color analysis preserves the original chalk colors in the extracted notes. Using a simplified method for determining lecture break points, the system outputs the extracted notes at each break point.
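As a rough illustration of the adaptive-threshold and color-analysis steps described above, the sketch below extracts bright chalk strokes from a cropped board image and copies their original colors onto a black canvas. It assumes OpenCV; the block size, offset, and minimum component area are illustrative choices rather than the values used in the thesis.

```python
import cv2
import numpy as np

def extract_colored_notes(board_bgr):
    """Return (binary stroke mask, chalk strokes recolored on a black canvas)."""
    gray = cv2.cvtColor(board_bgr, cv2.COLOR_BGR2GRAY)

    # Chalk is brighter than the board: keep pixels clearly brighter than
    # their local neighborhood mean (the negative C shifts the threshold up).
    mask = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY, 31, -10)

    # Filter out small speckles (dust, compression noise) by component area.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    clean = np.zeros_like(mask)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] >= 8:
            clean[labels == i] = 255

    # Copy the original pixel colors of the surviving strokes so colored
    # chalk (e.g. red or yellow annotations) is preserved in the notes.
    notes = np.zeros_like(board_bgr)
    notes[clean == 255] = board_bgr[clean == 255]
    return clean, notes
```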
Experiments and analysis on 10 self-taped instruction videos show that the system achieves 90% character extraction. It also compares favorably with the system proposed in [34]. The experiments further show that the system produces similarly good results across different teaching environments.
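The abstract only mentions a "simplified method for determining lecture break points." One plausible way to realize such a method, assuming the number of stroke pixels per sampled frame is already available, is to treat a sharp drop in that count as an erasure and keep the frame just before it; the drop ratio below is an assumed value, not the thesis's.

```python
def pick_note_frames(stroke_counts, drop_ratio=0.5):
    """stroke_counts: number of stroke pixels in each sampled frame.
    Returns indices of frames to keep as lecture notes."""
    notes, peak = [], 0
    for i, count in enumerate(stroke_counts):
        if count >= peak:
            peak = count                       # board is still filling up
        elif peak > 0 and count < peak * drop_ratio:
            notes.append(i - 1)                # sharp drop: the board was erased
            peak = count
    if peak > 0:
        notes.append(len(stroke_counts) - 1)   # keep the final board state
    return notes
```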
[1] (2002). Retrieved from National Chi Nan University, Voice-Web Synchronous Distance Learning System: http://wsml.csie.nsnu.edu.tw/
[2] (2007). Retrieved from National Sun Yat-sen University, Cyber University: http://cu.nsysu.edu.tw/
[3] (2008). Retrieved from Department of Information and Computer Education, National Taiwan Normal University, Video-Assisted Learning Website: http://elearning.ice.ntnu.edu.tw/
[4] (2015). Retrieved from Stanford Online: http://online.stanford.edu/
[5] (2016). Retrieved from Lab color space: http://en.wikipedia.org/wiki/Lab_color_space
[6] (2016). Retrieved from Georgia Institute of Technology: http://www.cc.gatech.edu/
[7] Abutaleb, A., & Eloteifi, A. (1988). Automatic Thresholding of Gray-Level Pictures Using 2-D Entropy. 31st Annual Technical Symposium, 29-35.
[8] Bay, H., Tuytelaars, T., & Gool, L. (2006). Surf: Speeded up robust features. Computer Vision–ECCV, 404-417.
[9] Berge, Z. (1997). Computer conferencing and the on-time classroom. International Journal of Education Telecommunications, 3-21.
[10] Bradley, D., & Roth, G. (2007). Adaptive Thresholding Using the Integral Image. Journal of Graphics Tools, 13-21.
[11] Bransford, J., Sherwood, R., Kinzer, C., & Hasselbring, T. (1985). Havens for learning: Toward a framework for developing effective uses of technology. ERIC Report Production Services.
[12] Brofferio, S. (1998). A university distance lesson system: experiments, services, and future developments. IEEE Transactions on Education, 17-24.
[13] Brown, M., & Lowe, D. (2007). Automatic panoramic image stitching using invariant features. International Journal of Computer Vision, 59-73.
[14] Bruning, M. (1992). VIS: Technology for multicultural teacher education. TechTrends, 13-14.
[15] Choudary, C., & Liu, T. (2006). Extracting content from instructional videos by statistical modelling and classification. Pattern Analysis and Applications, 69-81.
[16] Comaniciu, D., & Meer, P. (2002). Mean shift: A robust approach toward feature space analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 603-619.
[17] Hartley, R., & Zisserman, A. (2003). Multiple view geometry in computer vision. Cambridge University Press.
[18] He, L., & Zhang, Z. (2005). Real-time whiteboard capture and processing using a video camera for teleconferencing. IEEE International Conference on Acoustics, Speech, and Signal Processing, 1113-1116.
[19] Imran, A., & Cheikh, F. (2011). Blackboard content classification for lecture videos. 2011 18th IEEE International Conference on Image Processing, 2989-2992.
[20] Latchman, H., Salzmann, C., Gillet, D., & Bouzekri, H. (1999). Information technology enhanced learning in distance and conventional education. IEEE Transactions on Education, 247-254.
[21] Li, L., Huang, W., Gu, I., & Tian, Q. (2003). Foreground object detection from videos containing complex background. 2003 Proceedings of the eleventh ACM international conference on Multimedia, 2-10.
[22] Liu, T., & Choudary, C. (2006). Content extraction and summarization of instructional videos. 2006 International Conference on Image Processing, 149-152.
[23] Liu, T., & Choudary, C. (2007). Summarization of visual content in instructional videos. IEEE Transactions on Multimedia, 1443-1455.
[24] Lourakis, M. (2005). A Brief Description of the Levenberg-Marquardt Algorithm Implemented by levmar. Institute of Computer Science, Foundation for Research and Technology - Hellas (FORTH).
[25] MacQueen, J. (1967). Some methods for classification and analysis of multivariate observations. Proceedings of the fifth Berkeley symposium on mathematical statistics and probability, 281-297.
[26] Okuni, S., Tsuruoka, S., Rayat, G., Kawanaka, H., & Shinogi, T. (2007). Video scene segmentation using the state recognition of blackboard for blended learning. Convergence Information Technology, 2437-2442.
[27] Onishi, M., Izumi, M., & Fukunaga, K. (2000). Blackboard segmentation using video image of lecture and its applications. IEEE Pattern Recognition, 615-618.
[28] Son, J., Bovik, A., Park, S., & Kim, K. (2007). A Convolution Kernel Method for Color Recognition. IEEE Computer Society, 22-24.
[29] Tuna, T., Subhlok, J., & Shah, S. (2011). Indexing and keyword search to ease navigation in lecture videos. IEEE Applied Imagery Pattern Recognition Workshop, 1-8.
[30] Varano, P., Casciola, G., & Sessione, I. (2008). Elaborazioni di Immagini con la Libreria OpenCV [Image processing with the OpenCV library], 27-32.
[31] Wang, F., Ngo, C., & Pong, T. (2007). Lecture video enhancement and editing by integrating posture, gesture, and text. IEEE Transactions on Multimedia, 397-409.
[32] Wienecke, M., Fink, G., & Sagerer, G. (2003). Towards automatic video-based whiteboard reading. IEEE Document Analysis and Recognition, 87-91.
[33] Yang, H., Siebert, M., Luhne, P., Sack, H., & Meinel, C. (2011). Automatic lecture video indexing using video OCR technology. IEEE International Symposium on Multimedia, 111-116.
[34] 陳映如 (Chen, Ying-Ju). (2013). 輔助傳統教學影片視頻分割與索引之研究 [A study on video segmentation and indexing to assist traditional instruction videos]. Master's thesis, Institute of Computer Science and Information Engineering, National Taiwan Normal University.