Master's/Doctoral Thesis 93541010 Detailed Record
Name  Chung-Yuan Lin (林崇元)    Department  Electrical Engineering
Thesis Title  Design of Object Detection and Object Compression for Intelligent Surveillance System
Related Theses
★ Low-memory hardware design for real-time SIFT feature extraction
★ Real-time face detection and recognition for an access control system
★ Autonomous vehicle with real-time automatic following
★ Lossless compression algorithm and implementation for multi-lead ECG signals
★ Offline custom speaker wake-word system and embedded implementation
★ Wafer map defect classification and embedded system implementation
★ Densely connected convolutional networks for small-footprint keyword spotting
★ G2LGAN: data augmentation for imbalanced datasets applied to wafer map defect classification
★ Algorithm design techniques for compensating finite precision in multiplierless digital filters
★ Design and implementation of a programmable Viterbi decoder
★ Low-cost vector rotator IP design based on extended elementary-angle CORDIC
★ Analysis and architecture design of a JPEG2000 still-image coding system
★ Low-power turbo decoder for communication systems
★ Platform-based design for multimedia communication
★ Design and implementation of a digital watermarking system for MPEG encoders
★ Algorithm development for video error concealment with data-reuse considerations
  1. The author has agreed to make this electronic thesis available immediately.
  2. The released electronic full text is licensed only for personal, non-commercial retrieval, reading, and printing for academic research purposes.
  3. Please comply with the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast this work without authorization.

Abstract (Chinese)  Emerging intelligent surveillance systems attempt to use vision-based analysis tasks to understand and predict events occurring in the monitored scene, thereby achieving automated surveillance over wide areas. Among these analysis tasks, detecting foreground objects is an early and decisive one. The goal of foreground detection is to separate the foreground regions of interest in an image from the uninteresting background regions. By analyzing the foreground regions, the surveillance system can automatically understand the behavior of foreground objects in the video. As hardware manufacturing processes have matured, high-resolution cameras have become widespread. However, as the number of pixels per frame grows, applying foreground detection to high-resolution cameras incurs an extremely high computational load and raises the hardware cost of the entire surveillance system; this cost also grows with the number of cameras deployed. This thesis examines foreground detection from the perspectives of detection robustness and computational complexity, and proposes possible solutions.
First, for small-scale surveillance systems, we propose a foreground detection method suited to digital signal processors (DSPs). The algorithm predicts the temporal correlation of the data and, based on the predicted correlation, sets computation-related parameters to avoid unnecessary operations. Exploiting the hardware resources of the DSP, we also propose an adaptive frame-rate control mechanism. It automatically measures the computational load of foreground detection in a multi-camera surveillance system and tunes the correlation parameters to sustain real-time performance, so the hardware cost does not rise as cameras are added. We implemented the foreground detection on a verifiable hardware platform: a single DSP suffices to perform foreground detection simultaneously for 16 CIF-resolution cameras.
Second, for large-scale surveillance systems, we design a low-complexity foreground detection method. The primary design consideration is to ignore moving backgrounds while still detecting moving foregrounds. This thesis proposes an object-level human-machine interaction mechanism to achieve this goal. The mechanism can change the conditions under which a moving object is regarded as foreground; these conditions vary with the monitored scene and are generated through user interaction via the human-machine interface. With this mechanism, a low-complexity foreground detection method suffices to reach a good detection rate. We also propose a system-on-chip processor design to implement the algorithm. The processor can process 30 HD720 frames per second in real time, with a maximum throughput of 32.707 MPixels/s.
Third, we propose an auxiliary mode to enhance detection quality in complex environments. By studying spatiotemporal probability density functions in complex environments, we find that any particular region exhibits a discernible probability density distribution, and that this distribution can be obtained with a simple background model. Based on this result, we propose an enhanced foreground detection algorithm that uses the discernible distributions to compute a regional likelihood ratio test, further distinguishing strongly moving backgrounds from moving foregrounds. The thresholds used in the test are generated automatically and adapt to changes in image content. Quantitative evaluations and comparisons show that our method delivers more accurate detection than current state-of-the-art algorithms.
Finally, we propose an object-based coding scheme that efficiently transmits video objects over the surveillance network to a variety of devices. The scheme exploits the content-dependent redundancy of foreground and background for more efficient compression. Using a mixture-of-Gaussians background model, we classify coding blocks according to their content-dependent redundancy, so motion-vector estimation is performed only on blocks that actually involve motion. To encode the different block types, we derive a dual-closed-loop coding scheme. Experiments show that, compared with MPEG-4 and other object-based coding schemes, the dual-closed-loop scheme achieves higher coding efficiency while significantly reducing overall coding complexity.
Abstract (English)  Emerging intelligent video surveillance attempts to provide vision-based analysis tasks that understand and predict actions in the field of view for automated wide-area surveillance. Among these tasks, detecting visual foreground objects is an early and crucial one. Foreground detection separates the visual objects of interest from the background. By analyzing the detected objects in a scene, a surveillance system can automatically understand actions. Owing to progress in hardware technology scaling, which has made high-resolution sensors practical, applying foreground detection in a surveillance system often leads to a high computational load and increases the cost of the entire system when a mass deployment of end cameras is needed. This thesis explores foreground detection from the perspectives of detection robustness and computational complexity, and contributes four key techniques to the surveillance scenario.
First, a DSP-based foreground detection solution for small-scale multi-camera surveillance systems is presented. The algorithm incorporates a temporal data correlation predictor that exploits the correlation between data to reduce computation. On top of this DSP-oriented foreground detection, an adaptive frame-rate control is developed as a low-cost solution for such systems: it automatically measures the computational load of foreground detection over multiple video sources and tunes the temporal data correlation predictor to meet the real-time specification. Therefore, no additional hardware cost is required as the number of deployed cameras increases. The approach has been validated on a demonstration platform; performance reaches 30 CIF frames per second for a 16-camera surveillance system using a single DSP chip.
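To illustrate the temporal-data-correlation idea, the minimal sketch below (hypothetical function names and thresholds, not the thesis' DSP implementation) predicts pixels that barely changed since the previous frame as correlated and skips the background-subtraction test for them, returning both the mask and the fraction of work saved.

```python
import numpy as np

def tdcp_foreground(prev_frame, curr_frame, background,
                    corr_thresh=2, fg_thresh=25):
    """Toy temporal-data-correlation-predictor-style skipping.

    Pixels whose value barely changed since the previous frame are
    predicted as temporally correlated and skip the background test.
    """
    diff_t = np.abs(curr_frame.astype(int) - prev_frame.astype(int))
    tested = diff_t > corr_thresh               # only these pixels get the full test
    fg = np.zeros(curr_frame.shape, dtype=bool)
    fg[tested] = np.abs(curr_frame[tested].astype(int)
                        - background[tested].astype(int)) > fg_thresh
    return fg, 1.0 - tested.mean()              # mask and fraction of work skipped
```

A frame-rate controller in this spirit would raise `corr_thresh` when the measured per-frame load exceeds the real-time budget, so more pixels are skipped.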
Second, a low-cost foreground detection solution for distributed surveillance systems is presented. The primary issue is tolerating background motion while detecting foreground motion in dynamic scenes. This thesis presents a human-machine interaction in object level (HMIiOL) scheme, which varies the conditions under which a moving object is regarded as a foreground object. The conditions depend on the scene and are derived from information gathered through human-machine interaction. With this scheme, a simple algorithm can achieve good foreground detection despite significant background motion. A processor based on system-on-chip design is also presented for HMIiOL-based foreground detection. Its detection capability reaches HD720 at 30 Hz, and the maximum throughput can be up to 32.707 MPixels/s.
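The object-level constraint idea can be sketched as follows (a simplified illustration with hypothetical names, not the HMIiOL processor's actual logic): connected foreground regions whose pixel count falls outside user-supplied bounds are rejected, where in the thesis' scheme such bounds would come from human-machine interaction and vary per scene.

```python
import numpy as np
from collections import deque

def filter_by_object_size(mask, min_size, max_size):
    """Keep only 4-connected foreground regions whose pixel count lies in
    [min_size, max_size] -- a toy version of a user-tuned size constraint."""
    h, w = mask.shape
    visited = np.zeros((h, w), dtype=bool)
    out = np.zeros((h, w), dtype=bool)
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not visited[sy, sx]:
                # BFS over one connected component
                comp, q = [], deque([(sy, sx)])
                visited[sy, sx] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            q.append((ny, nx))
                if min_size <= len(comp) <= max_size:
                    for y, x in comp:
                        out[y, x] = True
    return out
```

Tightening the bounds suppresses small moving-background blobs (leaves, ripples) that a low-complexity detector would otherwise report as foreground.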
Third, an auxiliary mode is presented to further enhance foreground detection quality in a complex environment. A study of the spatiotemporal probability density functions of background and foreground in a complex scene supports the assertion that a discernible probability density function exists in a particular spatiotemporal region and can be effectively learned using a simple background model. An enhanced algorithm based on a regional likelihood ratio test that exploits these discernible probability density functions is proposed. The thresholds used in the test are automatically estimated and adapted to the context of the video sequence. Quantitative evaluation and comparison with state-of-the-art approaches show that the presented algorithm provides much improved results.
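A regional likelihood ratio test of this flavor can be sketched with single-channel Gaussian models (an illustrative simplification with a hypothetical fixed threshold `tau`; the thesis estimates its thresholds automatically from the video context): a region is declared foreground when the foreground model explains its pixels better than the background model.

```python
import math

def gaussian_log_pdf(x, mean, var):
    """Log density of a univariate Gaussian."""
    return -((x - mean) ** 2) / (2 * var) - 0.5 * math.log(2 * math.pi * var)

def regional_lrt(region, bg_mean, bg_var, fg_mean, fg_var, tau=1.0):
    """Average log-likelihood ratio over a region's pixel values;
    declare foreground when it exceeds log(tau)."""
    llr = sum(gaussian_log_pdf(v, fg_mean, fg_var)
              - gaussian_log_pdf(v, bg_mean, bg_var)
              for v in region) / len(region)
    return llr > math.log(tau)
```

Because the ratio is aggregated over a region rather than decided per pixel, isolated background flicker tends to average out while coherent foreground motion does not.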
Finally, an object-based video coding scheme is presented to efficiently transmit visual object data through the network to various devices such as storage devices, servers, and remote clients. The contextual redundancy associated with background and foreground objects in a scene is exploited. With a mixture-of-Gaussians background model, a method is presented to classify macroblocks according to their type of contextual redundancy, so that motion search is performed only on macroblocks whose context actually involves motion of interest. To facilitate encoding by macroblock context, an improved object-based coding architecture, the dual-closed-loop encoder, is derived. The presented framework achieves higher coding efficiency than MPEG-4 and related object-based coding approaches while significantly reducing coding complexity.
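A minimal sketch of the macroblock-context idea (hypothetical names and a 10% foreground-ratio rule; the thesis derives the context from a mixture-of-Gaussians background model): only macroblocks containing enough foreground pixels are flagged for motion search, while the remaining blocks can take a cheap skip/background path.

```python
import numpy as np

def classify_macroblocks(fg_mask, mb=16, fg_ratio=0.1):
    """Toy MB-context classification: an MB is 'moving' when more than
    fg_ratio of its pixels are foreground; only moving MBs get a motion
    search, the rest reuse the background (skip/zero-motion) path."""
    h, w = fg_mask.shape
    moving = []
    for y in range(0, h, mb):
        for x in range(0, w, mb):
            block = fg_mask[y:y+mb, x:x+mb]
            if block.mean() > fg_ratio:
                moving.append((y // mb, x // mb))
    return moving  # (row, col) indices of MBs that need motion estimation
```

In a surveillance scene where most of the frame is static background, the list of moving MBs is short, which is where the bulk of the complexity reduction comes from.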
Keywords (Chinese) ★ foreground detection
★ video compression
★ surveillance system
Keywords (English) ★ surveillance system
★ video compression
★ foreground detection
Table of Contents
CHAPTER 1 INTRODUCTION
1.1 Intelligent Surveillance System
1.2 Introduction to Foreground Object Detection
1.3 Motivation
1.4 Thesis Organization
CHAPTER 2 RELATED WORKS AND DESIGN CONSIDERATIONS
2.1 Background Subtraction Approach
2.1.1 Background Modeling Method
2.1.2 Decision Method
2.2 Perspective on Computational Complexity
2.3 Design Considerations
CHAPTER 3 MULTIPLE BACKGROUND MAINTENANCE ALGORITHM
CHAPTER 4 DESIGN OF FOREGROUND DETECTION FOR DSP-BASED MULTIPLE CAMERA SURVEILLANCE
4.1 Motivation
4.2 TDCP-based Foreground Detection Algorithm
4.2.1 Exploit Temporal Correlation
4.3 System Design with DSP-based Platform
4.3.1 Data Path of DSP
4.3.2 Data Independency Procedure
4.3.3 Parallel Procedure
4.3.4 Memory Usage
4.3.5 Frame Rate Control
4.4 Experimental Results
4.4.1 Parameters and Performance Measures
4.4.2 Detection Performance Evaluations
4.4.3 System Performance Evaluations
CHAPTER 5 DESIGN OF FOREGROUND DETECTION USING SYSTEM-ON-CHIP APPROACH
5.1 Motivation
5.2 HMIiOL-based Foreground Detection
5.2.1 Constraint for Object Appearance
5.2.2 Constraint for Object Size
5.3 Architecture Design for HMIiOL-based Foreground Detection
5.3.1 System-level Design Consideration
5.3.2 Data Path of Accelerators
5.3.3 MBM Accelerator
5.3.4 CCA Pass-1 Accelerator
5.3.5 OR1200 Processor
5.4 Experimental Results
5.4.1 Foreground Detection Performance
5.4.2 Performance Evaluation and Comparison on Hardwired Foreground Detection
CHAPTER 6 REGIONAL LIKELIHOOD RATIO TEST FOR ENHANCED FOREGROUND DETECTION
6.1 Motivation
6.2 Formulation of Regional Likelihood Ratio Test
6.3 Enhanced Mode for Foreground Detection
6.3.1 Automated Estimation of Global Threshold
6.3.2 Adapted Local Test
6.4 Experimental Results
6.4.1 Enhanced Foreground Detection Results
6.4.2 Quantitative Evaluation
6.4.3 Demonstration of Robustness of Detection Performance
CHAPTER 7 DESIGN OF OBJECT-BASED VIDEO CODING FOR INTELLIGENT SURVEILLANCE SYSTEM
7.1 Motivation
7.2 Exploiting MoG Model for Video Surveillance Coding
7.2.1 Context Determination of MB
7.2.2 The Dual-closed-loop Encoding Approach
7.3 Experimental Results
7.3.1 Rate-distortion Performance
7.3.2 Evaluation Advantage of Exploiting MoG Model
7.3.3 System Level Complexity Analysis
CHAPTER 8 CONCLUSIONS
REFERENCES
Advisor  Tsung-Han Tsai (蔡宗漢)    Date of Approval  2012-01-10

For questions about this thesis, please contact the Extension Services Division of the National Central University Library, TEL: (03)422-7151 ext. 57407, or by e-mail.