
Graduate Student: 陳佳珮 (Peggy Chen)
Thesis Title: 高速公路上鄰近車輛之危險動向偵測
Critical Motion Detection of Nearby Vehicles on Freeways
Advisor: 陳世旺
Degree: Master
Department: Department of Computer Science and Information Engineering
Year of Publication: 2003
Academic Year of Graduation: 91
Language: Chinese
Chinese Keywords: 動向偵測 (motion detection), 車輛偵測 (vehicle detection), 模糊整合 (fuzzy integral)
English Keywords: Motion Detection, Vehicle Detection, Fuzzy Integral
Document Type: Academic thesis
    This thesis applies image-processing techniques to detect the critical motions of vehicles near the host vehicle while driving on a freeway. The system consists of three components: a sensory analyzer, a perceptual analyzer, and a conceptual analyzer. The sensory analyzer locates moving objects in the image, focusing on the vehicles near the host vehicle. The perceptual analyzer uses an STA (spatial-temporal attention) neural network module to record the motion directions of nearby vehicles; its outputs are called attention maps. Each attention map is then divided into five windows so that obstacles at different positions can be detected, and for each window a skewness feature is computed as the input for classification. The conceptual analyzer classifies the skewness values of the windows with CART (configurable adaptive resonance theory) neural networks. Finally, a decision-making module applies fuzzy-integral theory to combine the results of the individual CART neural networks into the final classification. Several experimental examples are presented to validate the proposed method.
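The per-window skewness feature mentioned in the abstract can be sketched as follows. This is a minimal illustration of the third standardized moment; the function name and the sample activation values are assumptions for illustration, not code or data from the thesis.

```python
import math

def skewness(values):
    """Sample skewness (third standardized moment) of a window's
    attention-map activations. Illustrative helper, not thesis code."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    if var == 0:
        return 0.0          # a flat window carries no directional bias
    std = math.sqrt(var)
    return sum(((v - mean) / std) ** 3 for v in values) / n

# A window whose activations trail off in a long right tail is
# positively skewed; a symmetric window has skewness near zero.
print(skewness([1, 1, 1, 1, 2, 2, 3, 10]))  # positive
print(skewness([0, 2, 4, 5, 5, 6, 8, 10]))  # near zero
```

Because the sign and magnitude of the skewness summarize where activation mass concentrates inside a window, one scalar per window suffices as the classifier input described in the abstract.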

    We propose a system that detects the motion behaviors of nearby vehicles on the freeway. The system consists of three components: a sensory analyzer, a perceptual analyzer, and a conceptual analyzer. The sensory analyzer detects moving objects, especially the nearby vehicles. The perceptual analyzer records the motion directions of nearby vehicles in the attention map of an STA (spatial-temporal attention) neural network. We divide the attention maps into five overlapping windows, from each of which a skewness feature is computed. Each feature is fed into a CART (configurable adaptive resonance theory) neural network for motion classification. Using the fuzzy integral, the individual decisions made by the separate CART neural networks are combined into a final decision. A number of experimental results are presented, which demonstrate the feasibility of the proposed approach.
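The fusion step, combining the per-window CART decisions with a fuzzy integral, can be illustrated with a Sugeno fuzzy integral over a λ-fuzzy measure. The densities, confidence values, and function names below are illustrative assumptions, not values or code from the thesis.

```python
def solve_lambda(densities):
    """Find the nonzero root lam > -1 of prod(1 + lam*g_i) = 1 + lam,
    the normalization condition of a Sugeno lambda-fuzzy measure,
    by bisection. densities g_i are assumed classifier importances."""
    s = sum(densities)
    if abs(s - 1.0) < 1e-12:
        return 0.0                      # measure is simply additive
    def f(lam):
        p = 1.0
        for g in densities:
            p *= 1.0 + lam * g
        return p - (1.0 + lam)
    if s < 1.0:                         # root lies in (0, inf)
        lo, hi = 1e-9, 1.0
        while f(hi) < 0.0:
            hi *= 2.0
    else:                               # root lies in (-1, 0)
        lo, hi = -1.0 + 1e-9, -1e-9
    for _ in range(200):                # bisection on the sign change
        mid = (lo + hi) / 2.0
        if (f(lo) > 0.0) == (f(mid) > 0.0):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def sugeno_integral(confidences, densities, lam):
    """Sugeno fuzzy integral: max over min(h_(i), g(A_(i))), with
    confidences sorted in descending order."""
    pairs = sorted(zip(confidences, densities), reverse=True)
    g, result = 0.0, 0.0
    for h, gi in pairs:
        g = g + gi + lam * g * gi       # lambda-measure recursion
        result = max(result, min(h, g))
    return result

# Three hypothetical window classifiers with equal densities:
dens = [0.3, 0.3, 0.3]
lam = solve_lambda(dens)
print(sugeno_integral([0.9, 0.2, 0.4], dens, lam))
```

The integral rewards agreement: a high fused value requires both a confident classifier and enough cumulative measure behind it, which is why a single outlier window cannot dominate the final decision.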

    Table of Contents

    List of Tables
    List of Figures
    Chapter 1  Introduction
      1.1  Research Background and Objectives
      1.2  Related Work
      1.3  Organization of the Thesis
    Chapter 2  Critical Motion Detection System for Nearby Vehicles
      2.1  System Architecture
      2.2  Sensory Analyzer
      2.3  Perceptual Analyzer
      2.4  Conceptual Analyzer
    Chapter 3  Sensory Analyzer
    Chapter 4  Perceptual Analyzer
      4.1  STA (Spatial-Temporal Attention) Neural Network Module
      4.2  Class Feature Extraction Module
        4.2.1  Window Partitioning
        4.2.2  Feature Vector Extraction
    Chapter 5  Conceptual Analyzer
      5.1  CART (Configurable Adaptive Resonance Theory) Neural Network Module
      5.2  Decision Making Module
        5.2.1  Confidence Function
        5.2.2  Fuzzy Measure Function
        5.2.3  Fuzzy Integral
    Chapter 6  Experiments
      6.1  Experimental Results for Single Situations
      6.2  Experimental Results for Complex Situations
    Chapter 7  Conclusions
    References

    List of Tables
    Table 5-1: Skewness-value conversion for Figs. 4-11 and 4-12

    List of Figures
    Fig. 1-1: Static obstacles on a freeway
    Fig. 1-2: Dynamic obstacles on a freeway
    Fig. 1-3: Situations that may be encountered on a freeway
    Fig. 1-4: Some difficulties in vehicle detection with image-processing techniques
    Fig. 2-1: Flowchart of the critical motion detection system for nearby vehicles on freeways
    Fig. 2-2: Flowchart of feature extraction in the sensory analyzer
    Fig. 3-1: Flowchart of feature extraction in the sensory analyzer
    Fig. 3-2: Extraction of vehicle-light features
    Fig. 3-3: Example of feature extraction in the sensory analyzer
    Fig. 4-1: Architecture of the STA neural network
    Fig. 4-2: Connection weights between the input and output layers
    Fig. 4-3: Mexican-hat function
    Fig. 4-4: Activity of the output-layer neuron activations after an input stimulus
    Fig. 4-5: Example output of the STA neural network module in the perceptual analyzer
    Fig. 4-6: Attention maps of the motions of a single nearby vehicle and of the host vehicle
    Fig. 4-7: Positions of the five windows
    Fig. 4-8: Window partitioning of the attention map for "host vehicle completes a lane change to the right"
    Fig. 4-9: An attention map and its horizontal and vertical partitions
    Fig. 4-10: Skewness values of the blocks in Fig. 4-9(b)
    Fig. 4-11: Skewness values of the blocks in Fig. 4-9(c)
    Fig. 4-12: A symmetric attention map and the skewness values of its horizontal and vertical partitions
    Fig. 5-1: Line charts of a feature vector before and after conversion
    Fig. 5-2: Flowchart of combining the classification results of the individual CART neural networks
    Fig. 6-1: Road situation and attention maps for "vehicle in the left lane accelerates"
    Fig. 6-2: Fuzzy integral values of the five windows in each frame for "vehicle in the left lane accelerates"
    Fig. 6-3: Road situation and attention maps for "vehicle in the right lane accelerates"
    Fig. 6-4: Fuzzy integral values of the five windows in each frame for "vehicle in the right lane accelerates"
    Fig. 6-5: Road situation and attention maps for "vehicle in the right lane decelerates"
    Fig. 6-6: Fuzzy integral values of the five windows in each frame for "vehicle in the right lane decelerates"
    Fig. 6-7: Road situation and attention maps for "vehicle in the left lane decelerates"
    Fig. 6-8: Fuzzy integral values of the five windows in each frame for "vehicle in the left lane decelerates"
    Fig. 6-9: Road situation and attention maps for "vehicle in the right lane cuts in ahead of the host vehicle"
    Fig. 6-10: Fuzzy integral values of the five windows in each frame for "vehicle in the right lane cuts in ahead of the host vehicle"
    Fig. 6-11: Road situation and attention maps for "vehicle in the left lane cuts in ahead of the host vehicle"
    Fig. 6-12: Fuzzy integral values of the five windows in each frame for "vehicle in the left lane cuts in ahead of the host vehicle"
    Fig. 6-13: Road situation and attention maps for "vehicle ahead changes to the left lane"
    Fig. 6-14: Fuzzy integral values of the five windows in each frame for "vehicle ahead changes to the left lane"
    Fig. 6-15: Road situation and attention maps for "vehicle ahead changes to the right lane"
    Fig. 6-16: Fuzzy integral values of the five windows in each frame for "vehicle ahead changes to the right lane"
    Fig. 6-17: Road situation and attention maps for "vehicle ahead approaches the host vehicle"
    Fig. 6-18: Fuzzy integral values of the five windows in each frame for "vehicle ahead approaches the host vehicle"
    Fig. 6-19: Road situation and attention maps for "in a tunnel: vehicle on the right decelerates"
    Fig. 6-20: Fuzzy integral values of the five windows in each frame for "in a tunnel: vehicle on the right decelerates"
    Fig. 6-21: Method of synthesizing the simulated images
    Fig. 6-22: Synthesized attention maps for "vehicles in the left and right lanes decelerate simultaneously"
    Fig. 6-23: Fuzzy integral values of the five windows in each frame for "vehicles in the left and right lanes decelerate simultaneously"
    Fig. 6-24: Synthesized attention maps for "left-lane vehicle accelerates first, right-lane vehicle accelerates later"
    Fig. 6-25: Fuzzy integral values of the five windows in each frame for "left-lane vehicle accelerates first, right-lane vehicle accelerates later"
    Fig. 6-26: Synthesized attention maps for "host vehicle changes to the left lane + left-lane vehicle cuts into the host lane"
    Fig. 6-27: Fuzzy integral values of the five windows in each frame for "host vehicle changes to the left lane + left-lane vehicle cuts into the host lane"
    Fig. 6-28: Synthesized attention maps for "vehicle ahead changes to the right lane + left-lane vehicle accelerates"
    Fig. 6-29: Fuzzy integral values of the five windows in each frame for "vehicle ahead changes to the right lane + left-lane vehicle accelerates"

