Master's/Doctoral Thesis 93523030 — Detailed Record




Author  Ming-Yi Lin (林明毅)    Department  Communication Engineering
Title  Foreground Detection and Rate Allocation in Multi-Camera Surveillance System
(Chinese title: 多鏡頭視訊監控系統之前景區塊偵測與位元率分配機制)
Related theses
★ Satellite image super-resolution based on regional weighting
★ Adaptive high-dynamic-range image fusion algorithm extending the linear characteristic of the exposure curve
★ Complexity control of H.264 video coding implemented on a RISC architecture
★ Articulation disorder assessment based on convolutional recurrent neural networks
★ Few-shot image segmentation with mask generation by a meta-learning classification-weight transfer network
★ Implicit representation with attention mechanisms for image-based reconstruction of 3D human models
★ Object detection using adversarial graph neural networks
★ 3D face reconstruction based on weakly supervised learning of deformable models
★ Low-latency singing-voice conversion architecture for edge computing devices using unsupervised representation disentanglement learning
★ Human pose estimation from FMCW radar based on sequence-to-sequence models
★ Semantic scene completion from a monocular camera based on multi-level attention mechanisms
★ Contactless real-time vital-sign monitoring with a single FMCW radar based on temporal convolutional networks
★ Video traffic description and management over video-on-demand networks
★ High-quality voice conversion based on linear predictive coding and pitch-synchronous frame processing
★ Tone adjustment based on formant variations extracted through speech resampling
★ Optimization of transmission efficiency for real-time fine-granularity scalable video over wireless LANs
1. The electronic full text of this thesis is approved for immediate open access.
2. The open-access electronic full text is licensed to users only for personal, non-profit retrieval, reading, and printing for academic research purposes.
3. Please comply with the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast the work without authorization.

Abstract (Chinese)  In the new generation of surveillance systems, the use of network video recorders (NVRs) and IP cameras is the trend of future development. When multiple video streams are transmitted together over a fixed-bandwidth channel, an effective rate allocation mechanism is necessary. In this thesis, we propose an edge-based foreground block detection mechanism (EFBD) that locates the changed blocks in each frame and determines the importance of each camera from the number of such blocks. Based on this importance, we propose an adaptive rate allocation algorithm (AQRDRA) that assigns a higher bitrate to the more important cameras so that they obtain better video quality. Finally, we develop an H.264-based multi-camera video surveillance system on which the above algorithms are verified.
We simulated the proposed algorithms with eight cameras under an actual effective bandwidth of 1.1 Mbps. The results confirm that, with almost no impact on the visual quality of the non-critical cameras, the proposed method improves the video quality of the critical cameras by up to 8.7 dB compared with equal rate allocation, and it also helps the H.264 rate control mechanism reach the specified target bitrate more accurately.
Abstract (English)  In the new generation of video surveillance systems, adopting NVRs (Network Video Recorders) and IP cameras will be the future trend. When multiple video streams are transmitted together over a fixed-bandwidth channel, an efficient rate allocation mechanism is necessary. In this thesis, we develop an Edge-based Foreground Block Detection (EFBD) method to identify changing (foreground) blocks and then determine the importance of each camera from the EFBD result. Accordingly, we propose an Adaptive Q-R-D Rate Allocation (AQRDRA) method that allocates a higher bitrate to active cameras for better visual quality. Finally, we develop a multi-camera surveillance system based on the H.264 codec to implement and verify the proposed methods.
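The record does not reproduce the thesis body, so the exact EFBD parameters (edge detector, block size, thresholds) are not available here. The following C++ sketch only illustrates the general idea stated in the abstract: compare the edge map of the current frame against a background edge model block by block, and use the number of changed (foreground) blocks as the camera's importance measure. The function name countForegroundBlocks, the 16x16 block size, and the ratio threshold are assumptions for illustration, not the thesis's actual settings.

#include <cstdint>
#include <vector>

// Binary edge map of a width x height luminance frame (1 = edge pixel),
// e.g. obtained from a Sobel operator followed by thresholding.
using EdgeMap = std::vector<std::uint8_t>;

// Count "foreground" blocks: a block is foreground when the fraction of pixels
// whose edge state differs from the background edge model exceeds a threshold.
// The returned count serves as the camera-importance measure.
int countForegroundBlocks(const EdgeMap& current, const EdgeMap& background,
                          int width, int height,
                          int block = 16, double ratioThresh = 0.10) {
    int fgBlocks = 0;
    for (int by = 0; by + block <= height; by += block) {
        for (int bx = 0; bx + block <= width; bx += block) {
            int changed = 0;
            for (int y = by; y < by + block; ++y)
                for (int x = bx; x < bx + block; ++x)
                    changed += (current[y * width + x] != background[y * width + x]);
            if (changed > ratioThresh * block * block)
                ++fgBlocks;   // enough changed edge pixels -> foreground block
        }
    }
    return fgBlocks;
}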
The experiments are conducted with eight cameras under a total available bandwidth of 1.1 Mbps. The experimental results demonstrate that the proposed scheme outperforms uniform rate allocation: without noticeably sacrificing the quality of the inactive cameras, it improves the video quality of the active cameras by up to 8.7 dB. Moreover, the proposed method helps the H.264 rate control scheme achieve the target rate.
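As a rough illustration of how an importance-driven split of the fixed 1.1 Mbps channel among eight cameras might look (the actual AQRDRA method derives the allocation from Q-R and Q-D model curves, which this record does not reproduce), the sketch below divides the total budget in proportion to each camera's foreground-block count while reserving a minimum share for inactive cameras; the function name allocateRates and the minShare floor are assumptions.

#include <cstddef>
#include <vector>

// Split totalKbps (e.g. 1100 kbps for eight cameras) according to per-camera
// importance values (foreground-block counts from the detection step), while
// reserving minShare of the uniform share for every camera so that inactive
// cameras are not starved.
std::vector<double> allocateRates(const std::vector<int>& importance,
                                  double totalKbps, double minShare = 0.5) {
    const std::size_t n = importance.size();
    std::vector<double> kbps(n, 0.0);
    if (n == 0) return kbps;

    double sum = 0.0;
    for (int v : importance) sum += v;

    const double uniform = totalKbps / n;
    if (sum == 0.0) {                 // no activity anywhere: fall back to an equal split
        for (double& r : kbps) r = uniform;
        return kbps;
    }
    const double pool = totalKbps - n * minShare * uniform;  // budget left after the floors
    for (std::size_t i = 0; i < n; ++i)
        kbps[i] = minShare * uniform + pool * importance[i] / sum;
    return kbps;
}

With eight cameras, an 1100 kbps budget, and minShare = 0.5, an idle camera would still receive about 69 kbps under this sketch, while cameras with detected activity absorb the remaining budget in proportion to their block counts.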
Keywords (Chinese) ★ 多鏡頭視訊監控 (multi-camera video surveillance)
★ 位元率分配 (rate allocation)
★ 前景區塊偵測 (foreground block detection)
★ H.264視訊編碼 (H.264 video coding)
Keywords (English) ★ multi-camera surveillance system
★ rate allocation
★ foreground block detection
★ H.264 video coding
Table of Contents  Chapter 1  Introduction  1
1.1  Overview  1
1.2  Motivation and Objectives  2
1.3  Experimental Architecture of the Multi-Camera Surveillance System  3
1.4  Thesis Organization  6
Chapter 2  Pre-processing for the Multi-Camera Surveillance System: Foreground Block Detection  7
2.1  Research Objectives  7
2.2  Related Work  8
2.3  Edge-Based Foreground Block Detection  11
2.3.1  Foreground Edge Extraction  12
2.3.2  Foreground Block Extraction  15
2.3.3  Background Model Update  23
Chapter 3  Compression Coding for the Multi-Camera Surveillance System: Rate Allocation  26
3.1  Overview of the H.264 Video Compression Standard  26
3.1.1  Network Abstraction Layer  28
3.1.2  Video Coding Layer  31
3.2  H.264 Rate Control  40
3.2.1  Terminology  43
3.2.2  Overview of the Rate Control Process  45
3.2.3  GOP-Level Rate Control  46
3.2.4  Frame-Level Rate Control  47
3.2.5  Basic-Unit-Level Rate Control  50
3.3  Objectives of Multi-Camera Rate Allocation and Related Work  52
3.4  Adaptive Multi-Camera Rate Allocation Based on the Q-R-D Model  54
3.4.1  Linear Q-D Approximation  54
3.4.2  Power-Law Q-R Approximation  58
3.4.3  Adaptive Q-R-D Rate Allocation (AQRDRA)  61
Chapter 4  The Multi-Camera Video Surveillance System  64
4.1  System Overview and Architecture  64
4.2  System Features  66
4.3  System Implementation  70
4.3.1  Video Capture  70
4.3.2  Input Image Format  77
4.3.3  Program Optimization Using Intel MMX Technology  78
4.3.4  RTP Connections  87
4.4  Implementation Results  98
Chapter 5  Experimental Results and Discussion  108
5.1  Experimental Settings and Test Video Sequences  108
5.2  Results and Discussion of the Edge-Based Foreground Block Detection  111
5.3  Results and Discussion of the Q-R-D-Model-Based Adaptive Rate Allocation  118
5.4  System Performance Analysis and Comparison  139
Chapter 6  Conclusions and Future Work  142
References  144
Advisor  Pao-Chi Chang (張寶基)    Date of approval  2006-7-17
