Graduate student: 蔡承軒
Thesis title: 基於高斯混合模型之課堂舉手辨識研究 (Gaussian Mixture of Model based Arm Gesture Recognition Research)
Advisor: 李忠謀
Degree: Master
Department: Department of Computer Science and Information Engineering (資訊工程學系)
Year of publication: 2012
Graduating academic year: 100 (ROC calendar)
Language: Chinese
Number of pages: 45
Chinese keywords: Gaussian Mixture Model (高斯混合模型), human gesture recognition (人體姿勢辨識), temporal differencing (連續影像相減)
English keywords: Gesture recognition, Gaussian Mixture of Model, Temporal differencing
Document type: Academic thesis
Human gesture recognition is an active research topic, and recognition systems that identify human postures through image processing have been under development for some time. In academic and professional settings, however, such systems demand heavy computation and expensive equipment, which has kept them from reaching general users.

This thesis therefore designs a real-time interactive system that detects and recognizes students' hand-raising gestures. Assuming the upper-body region is already known, temporal differencing is applied within that region: temporally consecutive frames are subtracted pixel by pixel to yield an image of the moving object. A Gaussian Mixture Model (GMM) then describes the recurring background values with several Gaussian functions, and its parameters are adapted over time to absorb illumination changes, so that the foreground can be extracted even in a complex environment. Scale-Invariant Feature Transform (SIFT) features are extracted from the foreground and fed into a Support Vector Machine (SVM) to classify the gesture. The aim of the system is to replace expensive equipment with readily available devices so that human gesture recognition can become accessible to the general public.
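The detection stage described above can be illustrated with a short sketch. The snippet below is a minimal illustration only, not the thesis's implementation: it assumes OpenCV, hard-codes a hypothetical upper-body region (the thesis assumes this region is already known), and uses OpenCV's MOG2 subtractor, a Stauffer-Grimson style mixture of Gaussians, as a stand-in for the thesis's own GMM formulation.

```python
import cv2
import numpy as np

# Hypothetical upper-body region (x, y, width, height); the thesis assumes
# this region is already known, so it is simply hard-coded here.
UPPER_BODY_ROI = (100, 50, 200, 200)

def extract_motion_foreground(video_path):
    """Temporal differencing plus GMM background subtraction inside the ROI."""
    cap = cv2.VideoCapture(video_path)
    # MOG2 maintains several Gaussians per pixel and keeps adapting their
    # parameters, which absorbs gradual illumination changes.
    gmm = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16,
                                             detectShadows=False)
    x, y, w, h = UPPER_BODY_ROI
    prev_gray = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        roi = frame[y:y + h, x:x + w]
        gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)

        # Temporal differencing: pixel-wise subtraction of consecutive frames.
        if prev_gray is None:
            motion_mask = np.zeros_like(gray)
        else:
            diff = cv2.absdiff(gray, prev_gray)
            _, motion_mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        prev_gray = gray

        # GMM foreground mask for the same region.
        fg_mask = gmm.apply(roi)

        # Keep only pixels that are both moving and classified as foreground.
        yield roi, cv2.bitwise_and(fg_mask, motion_mask)
    cap.release()
```

Each yielded foreground mask would then be passed to the feature-extraction and classification stage sketched after the English abstract below.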
Human body gesture recognition is one of the most active research topics and has been studied for a long time. Because of the heavy computational load and expensive equipment involved, such systems have remained out of reach of the general public. In this thesis we develop a real-time Gaussian-mixture-model-based arm gesture recognition system. Under the assumption that the upper-body region is known, temporal differencing is applied to obtain the moving object. A Gaussian mixture model with several Gaussian functions then describes the multiple recurring background states; its parameters are continuously adjusted to cope with illumination changes and to extract the foreground in a complex environment. SIFT features are extracted from the foreground and classified with an SVM. We hope this system allows anyone to perform gesture recognition with inexpensive, easy-to-obtain devices.
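For that classification stage, the sketch below pairs SIFT descriptors with an SVM. The abstract does not say how the variable number of descriptors per image is turned into a fixed-length vector, so the bag-of-visual-words encoding (a 64-word k-means codebook) and the RBF-kernel SVM settings here are assumptions made purely for illustration.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

N_VISUAL_WORDS = 64          # assumed codebook size, not from the thesis
sift = cv2.SIFT_create()     # requires OpenCV >= 4.4

def sift_descriptors(image):
    """SIFT descriptor matrix (n x 128) for one grayscale foreground image."""
    _, desc = sift.detectAndCompute(image, None)
    return desc if desc is not None else np.empty((0, 128), dtype=np.float32)

def encode(desc, codebook):
    """L1-normalised histogram of visual-word assignments."""
    hist = np.zeros(N_VISUAL_WORDS, dtype=np.float32)
    if len(desc):
        for word in codebook.predict(desc):
            hist[word] += 1
        hist /= hist.sum()
    return hist

def train(images, labels):
    """images: grayscale arrays; labels: 1 = hand raised, 0 = not raised."""
    all_desc = np.vstack([sift_descriptors(img) for img in images])
    codebook = KMeans(n_clusters=N_VISUAL_WORDS, n_init=10).fit(all_desc)
    features = np.array([encode(sift_descriptors(img), codebook) for img in images])
    classifier = SVC(kernel="rbf", C=1.0).fit(features, labels)
    return codebook, classifier

def predict(image, codebook, classifier):
    return classifier.predict([encode(sift_descriptors(image), codebook)])[0]
```

Any other fixed-length encoding of the SIFT descriptors would slot into the same place; the SVM only requires that every sample has the same dimensionality.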