Thesis 985202071 — Detailed Record




Author: Ya-lun Wu (吳亞倫)    Department: Computer Science and Information Engineering
Thesis Title: Multi-camera Cooperative Object Tracking for Intelligent Video Surveillance
(多攝影機協同物件追蹤的智慧型視訊監控)
Related Theses
★ An Intelligent Controller Development Platform Integrating a GRAFCET Virtual Machine
★ Design and Implementation of a Distributed Industrial Electronic Kanban Network System
★ Design and Implementation of a Dual-Touch Screen Based on a Two-Camera Vision System
★ An Embedded Computing Platform for Intelligent Robots
★ An Embedded System for Real-Time Moving Object Detection and Tracking
★ A Multiprocessor Architecture and Distributed Control Algorithm for Solid-State Drives
★ A Human-Computer Interaction System Based on Stereo-Vision Gesture Recognition
★ Robot System-on-Chip Design Integrating Bio-Inspired Intelligent Behavior Control
★ Design and Implementation of an Embedded Wireless Image Sensor Network
★ A License Plate Recognition System Based on a Dual-Core Processor
★ Continuous 3D Gesture Recognition Based on Stereo Vision
★ Design and Hardware Implementation of a Miniature, Ultra-Low-Power Wireless Sensor Network Controller
★ Real-Time Face Detection, Tracking, and Recognition in Streaming Video: An Embedded System Design
★ Embedded Hardware Design of a Fast Stereo Vision System
★ Design and Implementation of a Real-Time Image Stitching System
★ An Embedded Gait Recognition System Based on a Dual-Core Platform
  1. This electronic thesis is approved for immediate open access.
  2. The open-access full text is licensed only for personal, non-commercial retrieval, reading, and printing for the purpose of academic research.
  3. Please comply with the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast this work without authorization.

Abstract (Chinese) Object detection and tracking play an important role in video surveillance and robot vision applications, and building a stable surveillance platform remains an ongoing research goal. However, surveillance equipment is often constrained by its hardware: the monitored range is typically too narrow, or it contains blind spots. To mitigate this, some systems adopt ultra-wide-angle lenses, digital PTZ cameras, or multi-camera architectures; all of these aim to enlarge the surveillance coverage so that the combined field of view approaches full-area monitoring.
  For multi-camera video surveillance, this thesis proposes a robust and efficient multi-camera cooperative tracking method. Compared with single-camera surveillance, it achieves more comprehensive coverage, enlarging both the monitored area and the viewing angle. We first apply progressive background modeling and calibrate the overlapping surveillance regions; connected-component analysis then extracts the foreground objects, which are tracked adaptively with a PSO (particle swarm optimization) algorithm augmented by an error-correction mechanism. When a tracked object enters an overlap region and is about to leave the current camera's view, the cameras negotiate a hand-off protocol to transfer the tracking right, achieving reliable and robust continuous tracking across cameras. We also propose a performance evaluation method for object tracking and use it to assess the multi-camera surveillance system designed in this study.
  Most current multi-camera tracking methods focus on obtaining depth information for objects in the overlap region; our method instead aims to enlarge the surveillance coverage, and it realizes continuous multi-camera object tracking without complicated scene-parameter setup.
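The adaptive PSO tracking step described in the abstract can be illustrated with a minimal sketch. The swarm size, inertia, and acceleration constants below are illustrative assumptions, not the thesis's actual parameters, and the fitness function is a placeholder: in the thesis it would be an appearance match (e.g. a color-histogram comparison) between a candidate window and the target model.

```python
import random

random.seed(7)  # deterministic demo run

W, H = 640, 480             # frame size (assumed)
N_PARTICLES, N_ITER = 20, 15
W_INERTIA, C1, C2 = 0.7, 1.5, 1.5  # illustrative PSO constants

def fitness(pos, target):
    """Placeholder appearance score: higher when the candidate position
    is closer to the target (stands in for a histogram match)."""
    dx, dy = pos[0] - target[0], pos[1] - target[1]
    return -(dx * dx + dy * dy)

def pso_track(prev_pos, target, radius=40):
    # initialize particles around the object's previous position
    parts = [[prev_pos[0] + random.uniform(-radius, radius),
              prev_pos[1] + random.uniform(-radius, radius)]
             for _ in range(N_PARTICLES)]
    vels = [[0.0, 0.0] for _ in range(N_PARTICLES)]
    pbest = [p[:] for p in parts]
    pbest_f = [fitness(p, target) for p in parts]
    g = max(range(N_PARTICLES), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(N_ITER):
        for i, p in enumerate(parts):
            for d in range(2):
                # standard PSO velocity and position update, clamped to the frame
                vels[i][d] = (W_INERTIA * vels[i][d]
                              + C1 * random.random() * (pbest[i][d] - p[d])
                              + C2 * random.random() * (gbest[d] - p[d]))
                p[d] = min(max(p[d] + vels[i][d], 0), (W, H)[d])
            f = fitness(p, target)
            if f > pbest_f[i]:
                pbest[i], pbest_f[i] = p[:], f
                if f > gbest_f:
                    gbest, gbest_f = p[:], f
    return gbest

# object previously at (300, 200); true position this frame is (320, 210)
est = pso_track(prev_pos=(300, 200), target=(320, 210))
print(est)  # the swarm's best estimate, typically close to (320, 210)
```

In the actual system the swarm would be re-seeded each frame around the last estimate, and the error-correction mechanism mentioned in the abstract would reject estimates whose fitness falls below a threshold.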
Abstract (English) In security surveillance and robot vision applications, object detection and tracking always play an important role. Building a stable intelligent surveillance platform is the ultimate goal, but a single camera usually suffers from a narrow field of view and blind spots; wide-angle lenses, PTZ (pan-tilt-zoom) cameras, or multi-camera systems can mitigate these issues.
  For multi-camera surveillance, we propose a robust and efficient mechanism to address the foregoing problems. We first use progressive background modeling and calibrate the demarcation (overlapped area) between cameras, apply connected-component analysis to extract foreground objects of interest, and then track them with a PSO tracker. When a tracked object is about to leave the current camera's view, a cooperative protocol hands over the tracking token so that tracking continues in the next camera. This cooperation between cameras achieves reliable and robust continuous tracking. Finally, we propose an NGT measure to evaluate tracking performance.
  Compared with other multi-camera approaches, which mostly aim the cameras at the overlapped area in order to obtain depth information as a matching criterion, we instead maximize the surveillance coverage beyond the overlapped area. With an uncomplicated camera calibration setup, we realize intelligent surveillance with multi-camera cooperative object tracking.
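The hand-off described above — transferring the tracking token when the object enters the calibrated overlap region — can be sketched as follows. The overlap coordinates, class, and function names are hypothetical illustrations, not the thesis's actual implementation.

```python
# x-range (in cam0's image) where the views of cam0 and cam1 overlap,
# as would be fixed during calibration (coordinates are assumed)
OVERLAP_X = (560, 640)

class Camera:
    def __init__(self, name):
        self.name = name
        self.has_token = False  # holds the right to track the object

def in_overlap(x):
    lo, hi = OVERLAP_X
    return lo <= x <= hi

def try_handoff(src, dst, obj_x):
    """Transfer the tracking token from src to dst once the tracked
    object's x-position enters the calibrated overlap region."""
    if src.has_token and in_overlap(obj_x):
        src.has_token = False
        dst.has_token = True  # dst re-acquires the object with its appearance model
        return True
    return False

cam0, cam1 = Camera("cam0"), Camera("cam1")
cam0.has_token = True                # cam0 is currently tracking the object
try_handoff(cam0, cam1, obj_x=300)   # object still inside cam0's interior: no hand-off
try_handoff(cam0, cam1, obj_x=600)   # object inside the overlap region: hand-off
print(cam0.has_token, cam1.has_token)  # False True
```

Whichever camera holds the token runs the tracker; the other camera only needs the appearance model to re-acquire the object, which is why no dense scene calibration is required.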
Keywords (Chinese) ★ multi-camera cooperative object tracking (多攝影機協同物件追蹤)
★ object tracking (物件追蹤)
★ intelligent surveillance (智慧型監控)
Keywords (English) ★ object tracking
★ multi-camera object tracking
★ intelligent video surveillance
Table of Contents
Abstract (Chinese) I
Abstract (English) II
Acknowledgments III
Table of Contents IV
List of Figures VI
List of Tables IX
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Literature Review 2
1.2.1 Object Detection 3
1.2.2 Object Tracking 9
1.2.3 Multi-camera Object Tracking 12
1.3 System Architecture 14
1.4 Thesis Organization 15
Chapter 2 Object Detection 16
2.1 Background Model 17
2.2 Image Preprocessing 22
2.2.1 Moving Object Detection 22
2.2.2 Morphological Image Processing 24
2.3 Moving Object Segmentation 28
2.3.1 Connected Components 28
2.3.2 Equal-Block Partitioning 30
Chapter 3 Object Tracking 31
3.1 Search Space 32
3.2 Tracking Method 34
3.2.1 Particle Swarm Optimization (PSO) Based Tracking 34
3.2.2 Feature Matching 38
3.3 Tracking Error Correction 42
Chapter 4 Multi-camera Object Tracking 45
4.1 Multi-camera Setup 45
4.2 Multi-camera Cooperative Tracking 49
Chapter 5 System Implementation and Experimental Results 51
5.1 Experimental Environment 51
5.2 Evaluation Method 52
5.3 Experimental Results 54
5.4 Discussion 66
Chapter 6 Conclusion and Future Work 67
6.1 Conclusion 67
6.2 Future Work 69
References 70
References
[1] A. Yilmaz, O. Javed, and M. Shah, “Object Tracking: A Survey,” In ACM Comput. Surv. 38, 4, Article 13, Dec. 2006.
[2] R. Basri and D. Jacobs, “Recognition Using Region Correspondences,” In International Journal of Computer Vision, 25, 2, pp. 141–162, 1996.
[3] D. G. Lowe, “Object Recognition from Local Scale-Invariant Features,” In IEEE International Conference on Computer Vision (ICCV), pp. 1150–1157, 1999.
[4] J. Y. Kuo, “The color recognition of objects of survey and implementation on real-time video surveillance,” In IEEE International Conference on System Man and Cybernetics (SMC), 3741–3748, 10–13 Oct. 2010.
[5] M. S. Nagmode, “Moving Object Detection from Image Sequence in Context with Multimedia Processing,” In IET International Conference on Wireless, Mobile and Multimedia Networks, 259–262, 2008.
[6] J. H. Cho and S. D. Kim, “Object detection using spatio-temporal thresholding in image sequences,” In Electronics Letters, 1109–1110, 2 Sept. 2004.
[7] H. Moravec, “Visual mapping by a robot rover,” In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI). 598–600, 1979.
[8] C. Harris and M. Stephens, “A combined corner and edge detector,” In 4th Alvey Vision Conference. 147–151, 1988.
[9] J. Shi and C. Tomasi, “Good features to track,” In IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 593–600, 1994.
[10] D. Lowe, “Distinctive image features from scale-invariant keypoints,” In Int. J. Comput. Vision 60, 2, 91–110, 2004.
[11] D. Comaniciu and P. Meer, “Mean shift: a robust approach toward feature space analysis,” In IEEE Trans. Patt. Analy. Mach. Intell. 24, 5, 603–619, 2002.
[12] J. Shi and J. Malik, “Normalized cuts and image segmentation,” In IEEE Trans. Patt. Analy. Mach. Intell. 22, 8, 888–905, 2000.
[13] M. Kass, A. Witkin, and D. Terzopoulos, “Snakes: active contour models,” In Int. J. Comput. Vision 1, 321–332, 1988.
[14] R. Jain and H. Nagel, “On the analysis of accumulative difference pictures from image sequences of real world scenes,” In IEEE Trans. Patt. Analy. Mach. Intell. 1, 2, 206–214, 1979.
[15] C. Wren, A. Azarbayejani, and A. Pentland, “Pfinder: Real-time tracking of the human body,” In IEEE International Conference on Image Processing (ICIP). 277–280, 1997.
[16] C. Stauffer and W. Grimson, “Learning patterns of activity using real time tracking,” In IEEE Trans. Patt. Analy. Mach. Intell. 22, 8, 747–767, 2000.
[17] A. Elgammal, D. Harwood, and L. Davis, “Background and foreground modeling using nonparametric kernel density estimation for visual surveillance,” Proceedings of IEEE 90, 7, 1151-1169, 2002.
[18] L. Li and Maylor K.H. Leung, “Integrating intensity and texture differences for robust change detection,” In IEEE Trans Image Process. 11, 2, 105-112, 2002.
[19] N. Oliver, B. Rosario, and A. Pentland, “A Bayesian computer vision system for modeling human interactions,” In IEEE Trans. Patt. Analy. Mach. Intell. 22, 8, 831-843, 2000.
[20] J. Zhong and S. Sclaroff, “Segmenting foreground objects from a dynamic textured background via a robust kalman filter,” In IEEE International Conference on Computer Vision (ICCV). 44–50, 2003.
[21] H. Rowley, S. Baluja, and T. Kanade, “Neural network-based face detection,” In IEEE Trans. Patt. Analy. Mach. Intell. 20, 1, 23–38, 1998.
[22] P. Viola, M. Jones, and D. Snow, “Detecting pedestrians using patterns of motion and appearance,” In IEEE International Conference on Computer Vision (ICCV). 734–741, 2003.
[23] C. Papageorgiou, M. Oren, and T. Poggio, “A general framework for object detection,” In IEEE International Conference on Computer Vision (ICCV). 555–562, 1998.
[24] H. Tanizaki, “Non-Gaussian state-space modeling of nonstationary time series,” In J. Amer. Statist. Assoc. 82, 1032–1063, 1987.
[25] D. Comaniciu, V. Ramesh, and P. Meer, “Kernel-based object tracking,” In IEEE Trans. Patt. Analy. Mach. Intell. 25, 564–575, 2003.
[26] B.D. Lucas and T. Kanade, “An iterative image registration technique with an application to stereo vision,” In International Joint Conference on Artificial Intelligence. 1981.
[27] D. Huttenlocher, J. Noh, and W. Rucklidge, “Tracking nonrigid objects in complex scenes,” In IEEE International Conference on Computer Vision (ICCV). 93–101, 1993.
[28] Y. Chen, Y. Rui, and T. Huang, “Jpdaf based hmm for real-time contour tracking,” In IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 543–550, 2001.
[29] A. Yilmaz, X. Li, and M. Shah, “Contour based object tracking with occlusion handling in video acquired using mobile cameras,” In IEEE Trans. Patt. Analy. Mach. Intell. 26, 11, 1531–1536, 2004.
[30] C. Stauffer and K. Tieu, “Automated multi-camera planar tracking correspondence modeling,” In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 1, pp. 259, 2003.
[31] R. Collins, A. Lipton, H. Fujiyoshi, and T. Kanade, “Algorithms for Cooperative Multisensor Surveillance,” Proceedings of the IEEE, 89(10): 1456–1477, Oct. 2001.
[32] Q. Cai and J. K. Aggarwal, “Tracking Human Motion in Structured Environments Using a Distributed-camera System,” In IEEE Trans. PAMI, 21(11): 1241–1247, Nov. 1999.
[33] S. Khan and M. Shah, “Consistent Labeling of Tracked Objects in Multiple Cameras with Overlapping Fields of View,” In IEEE Trans. PAMI, 25(10): 1355–1360, Oct. 2003.
[34] J. Kang, I. Cohen, and G. Medioni, “Continuous Tracking Within and Across Camera Streams,” In IEEE Conference on CVPR. 267–272, 2003.
[35] Y. C. Chung, J. M. Wang, and S. W. Chen, “Progressive Background Image Generation,” In Proc. of 15th IPPR CONF. ON Computer Vision, Graphics and Image Processing, pp. 858–865, 2002.
[36] J. Kennedy and R. Eberhart, “Particle swarm optimization,” In Proc. of IEEE International Conference on Neural Networks, Volume 4, pp. 1942–1948, 27 Nov.–1 Dec. 1995.
[37] B. M. Mehtre, M. S. Kankanhalli, A. D. Narasimhalu, and G. C. Man, “Color matching for image retrieval,” In Pattern Recognition Letters, pp. 325–331, 1995.
[38] H. Wu and Q. Zheng, “Self-evaluation of visual tracking systems,” In Proc. of ASC, Orlando (FL, USA), 29 Nov. –2 Dec. 2004.
[39] R. Liu, S. Li, X. Yuan, and R. He, “Online Determination of Track Loss Using Template Inverse Matching,” In Proc. of VS2008, Marseille (France), 17 Oct. 2008.
[40] N. Vaswani, “Additive change detection in nonlinear systems with unknown change parameters,” In IEEE Transactions on Signal Processing, 55(3):859–872, 2007.
[41] V. Badrinarayanan, P. Perez, F. Le Clerc, and L. Oisel, “Probabilistic Color and Adaptive Multi-Feature Tracking with Dynamically Switched Priority Between Cues,” In Proc. of ICCV’07, Rio de Janeiro (Brasil), 14–21 Oct. 2007.
[42] O. Javed, Z. Rasheed, K. Shafique, and M. Shah, “Tracking Across Multiple Cameras With Disjoint Views,” In IEEE International Conference on Computer Vision (ICCV). 952–957, 13–16 Oct. 2003.
[43] Y. Lu, “Intelligent Cooperative Tracking in Multi-camera Systems,” In Intelligent Systems Design and Application, 608–613, Nov. 30 2009–Dec. 2 2009.
[44] H. A. A. El-Halym, I. I. Mahmoud, A. AbdelTawab, and S. E. -D. Habib, “Particle Filter versus Particle Swarm Optimization for Object Tracking,” In 13th International Conference on Aerospace Sciences & Aviation Technology (ASAT), ASAT–13–RS–04, 26–28 May. 2009.
[45] B. Kwolek, “Multi Camera-based Person Tracking Using Region Covariance and Homography Constraint,” In 7th Advanced Video and Signal Based Surveillance (AVSS), 294–299, Aug.29 2010–Sept. 1 2010.
[46] C. Micheloni, G. L. Foresti, and L. Snidaro, “A Cooperative Multicamera System For Video-surveillance of Parking Lots,” In IEE Symposium on Intelligent Distributed Surveillance Systems, pp. 1–5, 26 Feb. 2003.
Advisor: Ching-han Chen (陳慶瀚)    Approval Date: 2012-06-28

For thesis-related questions, please contact the Outreach Services Division, National Central University Library, TEL: (03)422-7151 ext. 57407, or by e-mail. — Privacy Policy Statement