| Field | Value |
|---|---|
| Graduate student | 鍾允中 (Yun-Chung Chung) |
| Thesis title | 單張影像之特質影像萃取 (Characteristic Image Decomposition from a Single Image) |
| Advisor | 陳世旺 (Chen, Sei-Wang) |
| Degree | 博士 (Doctor) |
| Department | 資訊工程學系 (Department of Computer Science and Information Engineering) |
| Year of publication | 2009 |
| Academic year of graduation | 97 |
| Language | Chinese |
| Number of pages | 172 |
| Chinese keywords | 反光影像處理 (highlight reflection processing), 反射影像處理 (reflection image processing), 陰影處理 (shadow processing), 本質影像處理 (intrinsic image processing) |
| English keywords | interference reflection decomposition, highlight reflection separation, intrinsic image decomposition, shadow extraction |
| Thesis type | 學術論文 (Academic thesis) |
Chinese Abstract

For many computer vision applications, extracting characteristic images from a single input image is an important problem, with uses such as shadow analysis, the study of auxiliary lighting, highlight removal, and the removal of reflection images. For example, many vision-based Intelligent Transportation System (ITS) applications, such as traffic monitoring, traffic law enforcement, driver safety assistance, and automatic vehicle guidance, are deployed either outdoors or inside vehicles. They all share a common difficulty: lighting effects (including shadows and highlights) frequently interfere with, and degrade, the reliability of these systems, making subsequent processing considerably harder.

However, extracting characteristic images directly from a single input image is not easy. This dissertation proposes a reliable framework for doing so, consisting of four major steps: boundary generation, information extraction, boundary classification, and image composition. Three main problems solved with this framework are presented: interference reflections, highlight removal, and shadow removal. Their characteristic images are defined as follows: an interference image decomposes into an object image and a reflection image; a dichromatic reflection image decomposes into a specular image and a diffuse image; and an intrinsic-image decomposition yields a reflectance image and an illumination image.

Besides offering a solution for the applications mentioned above, the proposed framework for extracting characteristic images from a single image can be applied to any outdoor system, or to any system affected by illumination.
English Abstract

Many computer vision applications achieve good results under limited environmental conditions, but they often fail when those constraints are relaxed, as in real-world scenes. One of the most common restrictions imposed on vision algorithms is the illumination condition, so techniques that tolerate illumination variations are useful for general, realistic scenes. In this study, a solution is proposed to handle the undesired effects of illumination, such as shadows, highlights, and interference reflections. These effects are captured in characteristic images that are decomposed from the input image.
Since edges are one of the keys to understanding an image, a computational framework is developed that decomposes characteristic images from a single image based on its edges. The central idea is to classify the edge pixels of the image into target characteristic subsets. The framework consists of four major steps: boundary detection, evidence extraction, boundary classification, and characteristic image reconstruction. Given an image, its boundaries are first detected. Evidence is then extracted to classify the edge pixels into characteristic subsets. Based on this classification, an integration process is applied to the classified edges to reconstruct the characteristic images.
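As a rough illustration of this four-step flow, the following Python sketch wires the steps together under simplified assumptions. It is not the dissertation's implementation: OpenCV's Canny detector stands in for boundary detection, and `extract_evidence`, `classify_boundary_pixel`, and `reconstruct_from_edges` are hypothetical placeholders for the evidence measures, the fuzzy-integral classifier, and the derivative-domain integration described below.

```python
import numpy as np
import cv2  # OpenCV, used here only for edge detection and dilation

def extract_evidence(image, y, x):
    # Placeholder evidence: local contrast and local brightness of a 3x3
    # patch. The dissertation's actual evidence (TV model, blur measure,
    # chromatic measures, segmentation) would be computed here instead.
    patch = image[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2].astype(float)
    return np.array([patch.std() / 255.0, patch.mean() / 255.0])

def classify_boundary_pixel(evidence):
    # Placeholder rule standing in for the fuzzy-integral classifier:
    # label the pixel by whichever evidence component is stronger.
    return "A" if evidence[0] >= evidence[1] else "B"

def reconstruct_from_edges(image, edge_mask):
    # Placeholder reconstruction: keep the image only near the selected
    # edges; the dissertation instead integrates classified derivatives.
    mask = cv2.dilate(edge_mask.astype(np.uint8), np.ones((5, 5), np.uint8))
    return image * mask[..., None]

def decompose(image):
    """Four steps: boundary detection, evidence extraction,
    boundary classification, characteristic image reconstruction."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                     # step 1
    mask_a = np.zeros(edges.shape, bool)
    mask_b = np.zeros(edges.shape, bool)
    for y, x in np.argwhere(edges > 0):
        e = extract_evidence(image, y, x)                # step 2
        label = classify_boundary_pixel(e)               # step 3
        (mask_a if label == "A" else mask_b)[y, x] = True
    return (reconstruct_from_edges(image, mask_a),       # step 4
            reconstruct_from_edges(image, mask_b))
```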
Three applications of this computational framework, namely interference reflections, highlight reflections, and intrinsic images, are developed in this dissertation.

For interference reflections, a technique is presented that automatically separates the reflection and object components of a single interference image. The key idea is to classify the edges of the interference image as belonging to either the reflection or the object, and then to reconstruct the reflection and object images by integration. The method uses the total variation (TV) model, a blur measure, and region segmentation results as evidence, fused by the fuzzy integral technique, to classify the edge pixels. Based on the classification results, an integration method reconstructs the reflection and object components of the input image.

For separating specular and diffuse components, Shafer's dichromatic reflection model is utilized, which assumes that the light reflected at a surface point is a linear combination of diffuse and specular reflections. The major idea is to classify the boundary pixels of an image as specular or diffuse. A fuzzy integral process classifies boundary pixels based on their local evidence, including specular and diffuse estimation information. Based on the classification result, an integration method reconstructs the specular and diffuse components of the input image. Unlike previous research, the proposed method requires no color segmentation or iterative operations.

For intrinsic images, the proposed approach first convolves the input image with a prescribed set of derivative filters. The pixels of the derivative images are then classified as reflectance or illumination according to three measures computed in advance for each pixel of the input image: chromaticity, intensity contrast, and edge sharpness. Finally, an integration process is applied to the classified derivative images to obtain the intrinsic images of the original image. The experimental results demonstrate that the proposed methods can perform characteristic image decomposition from a single image effectively, with small misadjustments and rapid convergence.
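The fuzzy-integral fusion of evidence mentioned above can be pictured with a small numerical sketch. The Python code below is a minimal illustration, not the dissertation's implementation: it evaluates a Sugeno fuzzy integral with respect to a λ-fuzzy measure over three evidence sources, and the density and evidence values are made up purely for demonstration.

```python
import numpy as np
from scipy.optimize import brentq

def lambda_measure_parameter(densities):
    """Solve prod(1 + lam * g_i) = 1 + lam for the Sugeno lambda-measure."""
    densities = np.asarray(densities, float)
    if np.isclose(densities.sum(), 1.0):
        return 0.0  # measure is additive
    f = lambda lam: np.prod(1.0 + lam * densities) - (1.0 + lam)
    if densities.sum() > 1.0:
        return brentq(f, -1.0 + 1e-9, -1e-9)   # lambda lies in (-1, 0)
    return brentq(f, 1e-9, 1e9)                # lambda is positive

def sugeno_integral(evidence, densities):
    """Sugeno fuzzy integral of evidence values (in [0, 1]) with respect to
    the lambda-fuzzy measure defined by the per-source densities."""
    evidence = np.asarray(evidence, float)
    densities = np.asarray(densities, float)
    lam = lambda_measure_parameter(densities)
    order = np.argsort(evidence)[::-1]   # strongest evidence first
    g, result = 0.0, 0.0
    for i in order:
        g = g + densities[i] + lam * g * densities[i]  # measure of growing set
        result = max(result, min(evidence[i], g))
    return result

# Hypothetical edge pixel: support for the "reflection" and "object" labels
# from three sources (TV-model response, blur measure, segmentation
# agreement). Densities express how much each source is trusted; all
# numbers here are invented for the example.
reflection_evidence = [0.8, 0.6, 0.3]
object_evidence = [0.2, 0.4, 0.7]
densities = [0.5, 0.3, 0.4]

label = ("reflection"
         if sugeno_integral(reflection_evidence, densities)
         >= sugeno_integral(object_evidence, densities)
         else "object")
print(label)  # -> reflection
```

In this toy run the reflection evidence dominates the two most trusted sources, so the pixel is assigned to the reflection subset; the same comparison, applied per edge pixel, is what drives the boundary classification step.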