| Field | Value |
|---|---|
| Graduate student (研究生) | 黃冠人 Huang, Kuan-Jen |
| Thesis title (論文名稱) | 以眼角定位為基礎之眼球模型與凝視點估計 / Eyeball Model Construction Anchored by Canthus and Gaze Estimation |
| Advisor (指導教授) | 高文忠 Kao, Wen-Chung |
| Degree (學位類別) | Master (碩士) |
| Department (系所名稱) | 電機工程學系 Department of Electrical Engineering |
| Publication year (論文出版年) | 2020 |
| Academic year (畢業學年度) | 108 |
| Language (語文別) | Chinese |
| Pages (論文頁數) | 52 |
| Chinese keywords (中文關鍵詞) | 眼動儀, 可見光, 眼球模型, 頭動補償 |
| English keywords (英文關鍵詞) | gaze tracker, visible light, eye model, head movement compensation |
| DOI URL | http://doi.org/10.6345/NTNU202000342 |
| Thesis type (論文種類) | Academic thesis |
| Views / downloads (相關次數) | 204 / 0 |
Abstract (translated from the Chinese): Visible-spectrum gaze trackers require no extra infrared illumination and offer a better user experience than typical infrared gaze-tracking systems. In recent years, visible-spectrum gaze trackers have become increasingly important in human-machine interaction. Infrared systems that support head movement are already on the market, yet no comparable visible-spectrum product exists, and much room for improvement remains in letting users move their heads freely, because system accuracy degrades under head movement. Head-movement compensation, however, cannot simply be reduced to an eye-localization problem: when eye localization is inaccurate, the eyeball model built on it is inaccurate as well, and the subsequent gaze-point estimation becomes unacceptable. In this study we investigate a three-dimensional eyeball model anchored at the inner canthus and analyze the relative geometry between the eyeball center and the inner canthus. This mathematical relationship improves accuracy in the eyeball-model construction, iris-matching, and head-movement-compensation stages. In addition, in the head-movement-compensation stage, the method accurately estimates the relative positions of the camera, the screen, and the eye. Experimental results show that the proposed method tolerates slight inaccuracy in eye localization and improves gaze-point estimation accuracy.
The visible-spectrum gaze tracker (VSGT), which is designed without extra infrared (IR) illumination, provides a superior user experience compared with the traditional IR-based gaze tracker. It has become an important human-machine interface, yet allowing users to move their heads remains challenging: system performance degrades significantly under head movement. However, head movement compensation cannot simply be formulated as an eye detection problem. Even a minor error in the eye detection algorithm leads to an unacceptable gaze estimation result, because an incorrect eyeball model will be adopted. In this thesis, we further explore the 3-D eyeball model anchored by the inner eye corner point. The relative location between the eyeball center and the inner eye corner is analyzed, and this relationship is used to improve the eyeball model construction, the limbus circle matching, and the head motion compensation. In addition, the proposed approach accurately estimates the relative positions/poses among the camera, the screen, and the human eyes. The experimental results show that the proposed approach tolerates a wide range of eye detection error, so the gaze point estimation performance can be remarkably improved.
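The core idea in the abstract — treating the inner canthus as a rigid anchor whose offset to the eyeball center is fixed in head coordinates, then intersecting the gaze ray with the screen plane — can be sketched as below. This is a minimal illustrative sketch, not the thesis's actual implementation: the function names, the per-user offset vector, and the plane representation are all assumptions introduced here.

```python
import numpy as np

def rotation_from_euler(yaw, pitch, roll):
    """Head rotation matrix from Euler angles in radians (Z-Y-X order, assumed)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def eyeball_center_from_canthus(canthus_cam, R_head, offset_head):
    """Relocate the eyeball center after head movement.

    canthus_cam : 3-D inner-canthus position in camera coordinates.
    R_head      : current head rotation (3x3).
    offset_head : canthus-to-eyeball-center vector, fixed in head
                  coordinates (hypothetically calibrated once per user).
    """
    return canthus_cam + R_head @ offset_head

def gaze_point_on_screen(eye_center, gaze_dir, plane_point, plane_normal):
    """Intersect the gaze ray with the screen plane (all in camera coordinates)."""
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    denom = gaze_dir @ plane_normal
    if abs(denom) < 1e-9:
        raise ValueError("gaze ray is parallel to the screen")
    t = ((plane_point - eye_center) @ plane_normal) / denom
    return eye_center + t * gaze_dir
```

The point of the anchoring step is that the canthus, unlike the iris, does not move with eye rotation, so a small eye-detection error perturbs only the anchor slightly rather than corrupting the whole eyeball model.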
[1] D. Hansen and Q. Ji, “In the eye of the beholder: a survey of models for eyes and gaze,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 32, no. 3, pp. 478–500, 2010.
[2] K. Takemura, Y. Kohashi, T. Suenaga, J. Takamatsu, and T. Ogasawara, “Estimating 3-D point-of-regard and visualizing gaze trajectories under natural head movements,” in Proc. Symp. ETRA '10, 2010.
[3] A. Kar and P. Corcoran, “A review and analysis of eye-gaze estimation systems, algorithms and performance evaluation methods in consumer platforms,” IEEE Access, vol. 5, pp. 16495–16519, 2017.
[4] S. Shih and J. Liu, “A novel approach to 3-D gaze tracking using stereo cameras,” IEEE Trans. Systems, Man and Cybernetics, Part B (Cybernetics), vol. 34, no. 1, pp. 234–245, 2004.
[5] Y. I. Abdel-Aziz and H. M. Karara, “Direct linear transformation from comparator coordinates into object space coordinates in close-range photogrammetry,” in Proc. Symp. Close-Range Photogrammetry, pp. 1–18, Jan. 1971.
[6] C. Hennessey and P. Lawrence, “Noncontact binocular eye-gaze tracking for point-of-gaze estimation in three dimensions,” IEEE Trans. Biomedical Engineering, vol. 56, no. 3, pp. 790–799, 2009.
[7] J. Li and S. Li, “Gaze estimation from color image based on the eye model with known head pose,” IEEE Trans. Human-Machine Systems, vol. 46, no. 3, pp. 414–423, 2016.
[8] J. Huang, Q. Cai, Z. Liu, N. Ahuja, and Z. Zhang, “Towards accurate and robust cross-ratio based gaze trackers through learning from simulation,” in Proc. Symp. ETRA '14, 2014.
[9] D. H. Yoo and M. J. Chung, “A novel non-intrusive eye gaze estimation using cross-ratio under large head motion,” Computer Vision and Image Understanding, vol. 98, no. 1, pp. 25–51, 2005.
[10] D. W. Hansen, J. S. Agustin, and A. Villanueva, “Homography normalization for robust gaze estimation in uncalibrated setups,” in Proc. Symp. ETRA '10, 2010.
[11] F. L. Coutinho and C. H. Morimoto, “Free head motion eye gaze tracking using a single camera and multiple light sources,” in Proc. Symp. Computer Graphics and Image Processing, 2006.
[12] F. L. Coutinho and C. H. Morimoto, “Augmenting the robustness of cross-ratio gaze tracking methods to head movement,” in Proc. Symp. ETRA '12, 2012.
[13] I. Bacivarov, M. Ionita, and P. Corcoran, “Statistical models of appearance for eye tracking and eye-blink detection and measurement,” IEEE Trans. Consumer Electronics, vol. 54, no. 3, pp. 1312–1320, 2008.
[14] P. Koutras and P. Maragos, “Estimation of eye gaze direction angles based on active appearance models,” in Proc. IEEE Int. Conf. Image Processing, 2015.
[15] Y. Wu, C. Yeh, W. Hung, and C. Tang, “Gaze direction estimation using support vector machine with active appearance model,” Multimedia Tools and Applications, vol. 70, no. 3, pp. 2037–2062, Jul. 2012.
[16] H. Lu, C. Wang, and Y. Chen, “Gaze tracking by binocular vision and LBP features,” in Proc. 19th Int. Conf. Pattern Recogn., 2008.
[17] A. Meyer, M. Böhme, T. Martinetz, and E. Barth, “A single-camera remote eye tracker,” Perception and Interactive Technologies Lecture Notes in Computer Science, pp. 208–211, 2006.
[18] E. Guestrin and M. Eizenman, “Erratum to ‘General theory of remote gaze estimation using the pupil center and corneal reflections’,” IEEE Trans. Biomedical Engineering, vol. 53, no. 8, p. 1728, 2006.
[19] C. Hennessey, B. Noureddin, and P. Lawrence, “A single camera eye-gaze tracking system with free head motion,” in Proc. Symp. ETRA, 2006.
[20] T. Ohno and N. Mukawa, “A free-head, simple calibration, gaze tracking system that enables gaze-based interaction,” in Proc. ETRA, 2004.
[21] D. Beymer and M. Flickner, “Eye gaze tracking using an active stereo head,” in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recogn., vol. 2, Jun. 2003, pp. II-451–II-458.
[22] K. Wang and Q. Ji, “Real time eye gaze tracking with Kinect,” in Proc. 23rd Int. Conf. Pattern Recogn., 2016.
[23] L. Jianfeng and L. Shigang, “Eye-model-based gaze estimation by rgb-d camera,” in Proc. IEEE Conf. Comput. Vis. Pattern Recogn. Workshops, Columbus, OH, USA, Jun. 2014, pp. 606–610.
[24] P. Huber, Z.-H. Feng, W. Christmas, J. Kittler, and M. Ratsch, “Fitting 3d morphable face models using local features,” in Proc. IEEE Int. Conf. Image Processing, 2015.
[25] X. Xiong and F. D. L. Torre, “Supervised descent method and its applications to face alignment,” in Proc. IEEE Conf. Comput. Vis. Pattern Recogn., 2013.
[26] C. Lee, “High precision canthus alignment for visible-spectrum gaze tracking system with head movement compensation,” M.S. thesis, National Taiwan Normal University, Taipei, 2019.
[27] W. Kao and Y. Chiu, “Eyeball model construction and matching for visible-spectrum gaze tracking systems,” in Proc. IEEE 8th Int. Conf. Consumer Electronics - Berlin, 2018.
[28] “Camera Calibration and 3D Reconstruction,” OpenCV 2.4.13.7 documentation. [Online]. Available: https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html?highlight=calibratecamera. [Accessed: 10-Feb-2020].