Master's/Doctoral Thesis 945201028: Detailed Record




Name 許文財 (Wen-Tsai Sheu)    Department Electrical Engineering
Thesis Title 基於多模型背景維持之前景物件偵測及其數位訊號處理器實現
(Foreground Object Detection based on Multi-model Background Maintenance and Its DSP Implementation)
Related Theses
★ Low-Memory Hardware Design for Real-Time SIFT Feature Extraction ★ A Real-Time Face Detection and Face Recognition Access Control System
★ An Autonomous Vehicle with Real-Time Automatic Following ★ Lossless Compression Algorithm and Implementation for Multi-Lead ECG Signals
★ An Offline Custom Voice and Speaker Wake-Word System with Embedded Implementation ★ Wafer Map Defect Classification and Embedded System Implementation
★ Speech Densely Connected Convolutional Networks for Small-Footprint Keyword Spotting ★ G2LGAN: Data Augmentation on Imbalanced Data Sets for Wafer Map Defect Classification
★ Algorithm Design Techniques for Compensating the Finite Precision of Multiplierless Digital Filters ★ Design and Implementation of a Programmable Viterbi Decoder
★ Low-Cost Vector Rotator IP Design Based on Extended Elementary-Angle CORDIC ★ Analysis and Architecture Design of a JPEG2000 Still-Image Coding System
★ A Low-Power Turbo Code Decoder for Communication Systems ★ Platform-Based Design for Multimedia Communication
★ Design and Implementation of a Digital Watermarking System for MPEG Encoders ★ Algorithm Development for Video Error Concealment and Its Data Reuse Considerations
  1. The electronic full text of this thesis is approved for immediate open access.
  2. Once open access takes effect, the electronic full text is licensed to users only for personal, non-profit retrieval, reading, and printing for the purpose of academic research.
  3. Please comply with the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast the content without authorization.

Abstract (Chinese) In most computer vision application domains, such as video surveillance, traffic monitoring, human motion capture, and human-computer interaction, foreground object detection in a scene is usually referred to as background subtraction and is a critical preprocessing step. Background subtraction is a widely used method that detects moving objects from the difference between the current frame and a reference frame, where the reference frame is also called the background image or background model. As a basic principle, the background image must represent the scene without moving objects and must be updated regularly to cope with changing illumination conditions and the other problems mentioned in the introduction. Therefore, how to maintain a background image is a very important issue.
In this thesis, in order to obtain accurate foreground object detection under the problems described above, a multi-model background maintenance algorithm is proposed. The multi-model background maintenance framework contains two principal features for reconstructing a background image that reflects time-varying background changes. Under this framework, the background image is represented by the most significant and recurrent features of each pixel, namely the principal features. The principal features consist of static and dynamic features that represent the background pixels. Multi-model background maintenance comprises two major steps: background maintenance and foreground extraction. Experiments show that the proposed method provides good results on various sequences. Quantitative evaluation and comparison with existing methods show that the proposed method delivers improved results with lower computational complexity. Finally, we implement the multi-model background maintenance algorithm on the IEKC64x platform to achieve real-time foreground object detection.
Abstract (English) Foreground object detection in a scene, often referred to as “background subtraction”, is a critical early step in most computer vision applications in domains such as video surveillance, traffic monitoring, human motion capture, and human-computer interaction. Background subtraction is a widely used approach for detecting moving objects from the difference between the current frame and a reference frame, often called the “background image” or “background model”. As a basic principle, the background image must represent the scene with no moving objects and must be kept regularly updated so as to adapt to varying luminance conditions and the other problems described in the introduction. For this reason, how to maintain a background image is a very important issue.
In this thesis, in order to achieve accurate foreground object detection under the problems mentioned above, a Multi-model Background Maintenance (MBM) algorithm is proposed. The MBM framework contains two principal features to construct a practical background image under time-varying background changes. Under this framework, the background image is represented by the most significant and recurrent features at each pixel, called the principal features. The principal features consist of static and dynamic features that represent background pixels. MBM includes two major procedures: background maintenance and foreground extraction. Experiments show that the proposed method provides good results on different kinds of sequences. Quantitative evaluation and comparison with existing methods show that the proposed method provides much improved results with lower computational complexity. Finally, we implement the MBM algorithm on the IEKC64x platform to obtain real-time foreground object detection.
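To make the background-subtraction idea above concrete, the following is a minimal sketch in C of a per-pixel background model: each pixel keeps a running-average background value, the current frame is compared against it, and pixels whose difference exceeds a threshold are reported as foreground, while the remaining pixels slowly update the background. This is only an illustrative single-model simplification, not the thesis's MBM algorithm; the function name update_background_and_extract and the constants ALPHA and THRESHOLD are hypothetical.

#include <stdint.h>
#include <math.h>

#define ALPHA      0.05f   /* hypothetical learning rate for the background update */
#define THRESHOLD  25.0f   /* hypothetical foreground threshold, in gray levels    */

/*
 * Minimal single-model sketch of background subtraction:
 *  - frame[] is the current 8-bit grayscale frame,
 *  - bg[]    is a running-average background image (one float per pixel),
 *  - mask[]  receives 255 for foreground pixels and 0 for background pixels.
 * Pixels classified as background are blended into the model; foreground
 * pixels leave the model unchanged so that moving objects are not absorbed.
 */
void update_background_and_extract(const uint8_t *frame,
                                   float         *bg,
                                   uint8_t       *mask,
                                   int            num_pixels)
{
    for (int i = 0; i < num_pixels; ++i) {
        float diff = fabsf((float)frame[i] - bg[i]);

        if (diff > THRESHOLD) {
            mask[i] = 255;   /* changed pixel: report as foreground          */
        } else {
            mask[i] = 0;     /* unchanged pixel: fold it into the background */
            bg[i] = (1.0f - ALPHA) * bg[i] + ALPHA * (float)frame[i];
        }
    }
}

The MBM algorithm described in the abstract goes beyond this sketch by maintaining multiple models per pixel (static and dynamic principal features) rather than a single running average, which is what allows it to cope with time-varying background changes.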
Keywords (Chinese) ★ background maintenance
★ multi-model Gaussian distribution
★ background image
★ principal features
★ multi-model background maintenance
Keywords (English) ★ principal features
★ background maintenance
★ background image
★ multi-model Gaussian distribution
★ multi-model background maintenance (MBM)
Table of Contents Content..............................................iv
List of Figures........................................vi
List of Tables.........................................viii
Chapter 1 Introduction...................................1
1.1 Introduction...................................2
1.2 Thesis Organization............................5
Chapter 2 Background and Related Research................7
    2.1 Background: A Review of Background Subtraction..8
2.2.1 Nonparametric Approach......................10
2.2.2 Parametric Approach.........................13
Chapter 3 Proposed Multi-model Background Maintenance Algorithm................................................16
3.1 Overview of Proposed Algorithm.................17
3.1.1 Design Strategy.............................17
3.1.2 Flowchart of Proposed Algorithm.............19
3.2 Background Maintenance.........................20
3.2.1 Change Classification.......................21
3.2.2 Learning and Updating for Dynamic Change....22
3.2.3 Learning and Updating for Static Point......23
3.3 Foreground Extraction..........................25
Chapter 4 Experimental Result and Analysis...............26
4.1 Visual Interpretation..........................27
4.2 Quantitative Evaluations.......................33
4.3 Computation Complexity and Run-time Analysis...34
Chapter 5 Introduction to the DSP Platform and DSP Realization of Our Proposed Algorithm................................37
5.1 Introduction to ATEME IEKC64x Platform.........38
    5.1.1 The TI TMS320C6416 DSP Chip.................39
5.1.2 Central Processing Unit.....................40
5.1.3 Memory......................................42
5.2 TI TMS320C6416 DSP Features for Optimization...43
5.2.1 Introduction to the Code Composer Studio Development Tools........................................43
5.2.2 Code Optimization Flow......................45
5.2.3 Compiler Optimization Options...............46
5.3 Implement Our Proposed Algorithm...............50
5.3.1 Simulation Environment......................50
5.3.2 Implementation and Acceleration of Our Proposed MBM Algorithm on TI TMS320C6416 DSP.............52
5.3.3 Experimental Result on DSP Implementation and Acceleration.............................................61
5.3.4 Profiling Analysis on DSP Implementation and Acceleration.............................................62
Chapter 6 Conclusion.....................................63
Reference................................................65
Advisor 蔡宗漢 (Tsung-Han Tsai)    Date of Approval 2007-07-16