| Field | Value |
|---|---|
| Graduate Student | 林瑞硯 (Lin Ruei Yan) |
| Thesis Title | 使用網路攝影機即時人眼偵測與注視點分析 (Real-Time Eye Detection and Gaze Estimation Using Low Resolution Webcam) |
| Advisor | 李忠謀 (Lee, Chung-Mou) |
| Degree | Master |
| Department | Department of Computer Science and Information Engineering (資訊工程學系) |
| Publication Year | 2011 |
| Graduation Academic Year | 99 (ROC era) |
| Language | Chinese |
| Pages | 44 |
| Keywords (Chinese) | 眼睛偵測, 眼睛追蹤, 注視點分析 |
| Keywords (English) | Eye detection, Eye tracking, Gaze estimation |
| Thesis Type | Academic thesis |
For years, eye detection and gaze estimation have been popular research topics in both academia and applications, because the eyes are the most important and salient features of the human face. In academic work, the eyes often serve as features for face detection; in applications, they are widely used for visual tracking, for example the eye-controlled mouse that replaces manual mouse operation, driver fatigue detection, and the recently popular glasses-free 3D technology.
Most previous methods illuminate the eyes with intrusive infrared light or rely on expensive eye trackers. Although such setups raise the accuracy of eye detection and gaze estimation, they overlook the potential harm to the human body and the fact that ordinary users cannot easily obtain the equipment.
This thesis proposes a method for real-time eye detection and gaze estimation using a low-resolution webcam, achieving accurate eye detection and gaze estimation with low-cost equipment. The method consists of two main parts. First, face detection locates the face image, a lighting filter removes illumination interference, and an angle-weighting mechanism based on the nose position retains only the correct eye regions. Second, a gaze calibration procedure records the user's eye information at different gaze positions and builds a gaze model for the user's current environment; the gaze region is then determined by matching against this model.
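To make the first stage of the pipeline concrete, the following is a minimal Python sketch, assuming OpenCV's bundled Haar cascades stand in for the thesis's face and eye detectors; the histogram-equalization lighting filter, the assumed nose position, and the angle thresholds are illustrative stand-ins for the thesis's lighting filter and nose-based angle-weighting mechanism, not the author's exact implementation.

```python
import cv2
import numpy as np

# Illustrative pipeline: face detection -> lighting filter ->
# eye-candidate filtering by angle relative to an assumed nose position.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_eyes(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Lighting filter (illustrative): histogram equalization to
    # suppress uneven illumination before detection.
    gray = cv2.equalizeHist(gray)
    faces = face_cascade.detectMultiScale(gray, 1.1, 5)
    results = []
    for (x, y, w, h) in faces:
        roi = gray[y:y + h, x:x + w]
        # Assumption: the nose sits near the centre of the face box.
        nose = np.array([w / 2.0, h * 0.6])
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi, 1.1, 5):
            centre = np.array([ex + ew / 2.0, ey + eh / 2.0])
            v = centre - nose
            angle = np.degrees(np.arctan2(-v[1], v[0]))  # up is positive
            # Angle heuristic (assumed thresholds): true eyes lie above
            # the nose, roughly 30-150 degrees; discard other candidates.
            if 30.0 <= angle <= 150.0:
                results.append((x + ex, y + ey, ew, eh))
    return results
```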
Eye detection and gaze estimation play an important role in many applications, e.g., the eye-controlled mouse in assistive systems for disabled or elderly persons, fixation and saccade analysis in psychology, and iris recognition in security systems. However, traditional methodologies often employ intrusive infrared-based techniques or expensive eye trackers to achieve eye detection or gaze estimation, which is impractical for general applications.
In this thesis, we propose a real-time eye-gaze estimation system using a general low-resolution webcam, which estimates eye gaze accurately without expensive, specialized equipment. A hybrid model combining a position criterion with an angle-based eye detection strategy is derived to locate the eyes more accurately than conventional methods. Appearance-based features, which compactly describe the eye and the iris via Fourier descriptors, are employed in eye-gaze estimation, which is carried out by a support vector machine.
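As a rough illustration of the estimation stage, the sketch below computes magnitude-normalized Fourier descriptors from an eye/iris contour and feeds them to an SVM trained on calibration samples; the descriptor length `N_FD`, the RBF kernel, and the helper names are assumptions made for illustration, not the thesis's exact design.

```python
import numpy as np
from sklearn.svm import SVC

N_FD = 16  # number of Fourier coefficients kept (assumed)

def fourier_descriptor(contour):
    """Compact shape feature from a closed contour given as an
    (n, 2) array of (x, y) points; assumes n > N_FD."""
    z = contour[:, 0] + 1j * contour[:, 1]   # complex boundary signal
    f = np.fft.fft(z)
    f[0] = 0                                  # drop DC: translation invariance
    mag = np.abs(f)
    mag = mag / (mag[1] + 1e-12)              # normalize: scale invariance
    return mag[1:N_FD + 1]                    # low-order coefficients only

# Calibration: the user fixates each screen region in turn while
# eye/iris contours are recorded; descriptors become SVM training data.
def train_gaze_model(contours, region_labels):
    X = np.array([fourier_descriptor(c) for c in contours])
    clf = SVC(kernel="rbf", C=1.0)
    clf.fit(X, region_labels)
    return clf

def estimate_gaze(clf, contour):
    # Predict which calibrated screen region the user is gazing at.
    return clf.predict(fourier_descriptor(contour)[None, :])[0]
```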
The proposed algorithms have low computational complexity yet achieve strong performance in eye-gaze estimation. The experimental results also demonstrate the feasibility of the proposed methodology.