Master's/Doctoral Thesis 103522047: Detailed Record




Name: 何世偉 (Shih-Wei Ho)    Graduate department: Department of Computer Science and Information Engineering
Thesis title: Integrated Surrounding Monitor and Recognition System
(整合式的全周監視與辨識系統)
Related theses
★ Video error concealment for large damaged areas and scene changes
★ Force feedback correction and rendering in a virtual haptic system
★ Multispectral satellite image fusion and infrared image synthesis
★ A laparoscopic cholecystectomy simulation system
★ Dynamically loaded multiresolution terrain modeling in a flight simulation system
★ Wavelet-based multiresolution terrain modeling and texture mapping
★ Multiresolution optical flow analysis and depth computation
★ Volume-preserving deformation modeling for laparoscopic surgery simulation
★ Interactive multiresolution model editing techniques
★ Wavelet-based multiresolution edge tracking for edge detection
★ Multiresolution modeling based on quadric error and attribute criteria
★ Progressive image compression based on integer wavelet transform and grey theory
★ Tactical simulation built on dynamically loaded multiresolution terrain modeling
★ Face detection and feature extraction using spatial relations from multilevel segmentation
★ Wavelet-based image watermarking and compression
★ Appearance-preserving and view-dependent multiresolution modeling
  1. The author has agreed to release this electronic thesis immediately.
  2. The released electronic full text is licensed to users only for personal, non-profit retrieval, reading, and printing for academic research purposes.
  3. Please observe the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast it without authorization.

Abstract (Chinese): In recent years, security awareness has risen and vendors have kept introducing new products, so surveillance equipment has become increasingly widespread and can be seen everywhere in residential communities and office buildings. Most current building surveillance systems mount multiple cameras at different corners and angles to cover the environment; as a result, a guard must watch every screen at all times, and because the views are not unified into a single space, management is inconvenient.
Compared with a normal camera, a fisheye camera has a field of view of up to 180°, so it captures a much wider area. In the same surveillance environment, fisheye cameras therefore greatly reduce the number of devices needed and, in turn, the cost of building and managing the system. This study uses fisheye cameras as the main monitoring devices and proposes an integrated surrounding monitoring and recognition system with two major parts: surrounding monitoring, which watches the area around the building, and online foreground detection with intruder classification, which raises an alert only after object recognition confirms that a detected anomaly is an intruder.
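The online foreground-detection step described here can be sketched with a background model. The thesis itself builds codebook and ViBe models (Chapter 3); the running-average model below is a deliberately simplified stand-in, with made-up frame sizes, thresholds, and synthetic frames, meant only to illustrate the online detect-then-flag flow.

```python
import numpy as np

# Simplified foreground detection. The thesis uses codebook and ViBe
# background models; this running-average model is a toy substitute.
H, W = 120, 160
alpha = 0.05        # background update rate (assumed value)
threshold = 30      # per-pixel foreground threshold (assumed value)

background = np.full((H, W), 50.0)   # learned background estimate

def process(frame, background):
    """Return the foreground mask and the updated background model."""
    diff = np.abs(frame - background)
    mask = diff > threshold
    # Update the model only where the pixel looks like background,
    # so a standing intruder is not absorbed into it immediately.
    background = np.where(mask, background,
                          (1 - alpha) * background + alpha * frame)
    return mask, background

# Synthetic frames: a static scene, then an "intruder" block from frame 20 on.
for i in range(30):
    frame = np.full((H, W), 50.0)
    if i >= 20:
        frame[40:80, 60:100] = 200.0   # 40x40 intruder region
    mask, background = process(frame, background)

print("foreground pixels:", int(mask.sum()))   # the 40x40 intruder block
```

A real deployment would feed fisheye camera frames into `process` and pass any sufficiently large connected foreground region on to the recognition stage.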
For surrounding monitoring, we mount one fisheye camera on each side of the building, tilted 25° downward. In an offline procedure, we capture several images with the target at different positions, use the image coordinates of these feature points to derive the relation between the map coordinates and the image, and then apply the resulting coordinate-transformation matrix to mark object positions on the map image, producing a real-time surrounding-monitoring view.
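This offline calibration amounts to solving a homography from point correspondences (Section 5.3). Below is a minimal sketch using the direct linear transform (DLT); the point coordinates are invented for illustration and are not the thesis's actual correspondences.

```python
import numpy as np

def solve_homography(src, dst):
    """Estimate H with dst ~ H @ src from >= 4 point correspondences (DLT)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(rows, dtype=float)
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)     # null-space vector holds the 9 entries of H
    return H / H[2, 2]           # normalize so H[2, 2] == 1

def to_map(H, point):
    """Project an image point onto the map (homogeneous divide)."""
    u, v, w = H @ np.array([point[0], point[1], 1.0])
    return u / w, v / w

# Four image-plane points and where they land on the surrounding map
# (made-up coordinates, standing in for the measured feature points).
src = [(0, 0), (100, 0), (100, 100), (0, 100)]
dst = [(5, -1), (205, -1), (205, 299), (5, 299)]

H = solve_homography(src, dst)
print(to_map(H, (50, 50)))       # interior point mapped onto the map
```

Once `H` is solved offline, each detected object's image position can be pushed through `to_map` at runtime to place it on the surrounding map.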
To avoid unnecessary alerts, e.g., for passing vehicles, cats, or dogs, this thesis adds an object recognition stage. When a foreground object is detected, its features are extracted for recognition, and only after the object is confirmed to be an intruder is a warning message shown on screen to notify the user, reducing needless alerts and wasted manpower.
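The extracted features are HOG descriptors classified by an SVM (Chapter 4). The toy function below computes the orientation histogram of a single 8×8 cell, the building block of a HOG descriptor; block normalization, the sliding window, and the SVM stage are omitted here.

```python
import numpy as np

def cell_hog(cell, n_bins=9):
    """Orientation histogram of one cell, in the Dalal-Triggs HOG style."""
    gy, gx = np.gradient(cell.astype(float))
    magnitude = np.hypot(gx, gy)
    # Unsigned orientation in [0, 180) degrees.
    orientation = np.degrees(np.arctan2(gy, gx)) % 180.0
    bins = (orientation / (180.0 / n_bins)).astype(int) % n_bins
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), magnitude.ravel())   # magnitude-weighted vote
    return hist / (np.linalg.norm(hist) + 1e-9)        # L2 normalization

# A vertical edge produces horizontal gradients, so the energy
# concentrates in the 0-degree bin.
cell = np.tile(np.r_[np.zeros(4), np.full(4, 255.0)], (8, 1))
hist = cell_hog(cell)
print("dominant bin:", int(hist.argmax()))   # prints: dominant bin: 0
```

A full descriptor concatenates such histograms over all cells of the detection window before it is handed to the SVM.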
The system uses two fisheye cameras and was tested on several videos taken under different weather conditions. In detection, the average detection rate over 583 samples is 96.5%; in recognition, over 681 intruder samples, the recognition rate reaches 93.8% with a false-positive rate of 2.53%. The recognition errors are caused by human figures reflected in rain puddles on the ground or by foreground objects moving too fast.
Abstract (English): In recent years, as security awareness has gradually increased and manufacturers have kept introducing new products, monitoring devices have become more prevalent; they can be seen everywhere, in residential communities and office buildings alike. Currently, most building surveillance systems install multiple cameras at corners, aimed at different angles, to monitor the environment. As a result, a guard must pay attention to every screen at all times, and because there is no unified spatial view, management is less convenient.
The view angle of a fisheye camera can reach 180 degrees, so it covers a wider field of view than a normal camera. In the same surveillance environment, a few fisheye cameras can thus replace many traditional cameras, reducing the costs of system construction and management. We use fisheye cameras as our main monitoring devices and propose an integrated surrounding monitoring and recognition system composed of two major modules: a surrounding monitor and an online recognition system.
In the surrounding monitor module, we mount the cameras around the building, tilted 25 degrees downward. From the relationship between the image plane and the surrounding map, we solve a homography matrix and mark the intruder's position on the surrounding map.
In the recognition module, when a foreground object is detected, we extract its features to recognize whether it is an intruder. If it is, the system shows an alarm message on the screen to notify the user. The recognition stage thus eliminates most unnecessary alarms and saves manpower.
We conducted experiments with the proposed system on several videos. The results show that the average detection rate is 96.5% over 583 samples, the recognition rate reaches 93.8% over 681 samples, and the average false-positive rate is 2.53%.
Keywords (Chinese): ★ Object detection
★ Object recognition
★ Surrounding monitoring
Keywords (English):
Table of contents
Abstract (Chinese)
Abstract (English)
Acknowledgments
List of Figures
List of Tables
Chapter 1  Introduction
1.1 Research motivation
1.2 System overview
1.3 Thesis organization
Chapter 2  Related Work
2.1 Foreground object detection
2.2 Feature extraction
2.2.1 Haar-like features
2.2.2 HOG features
2.3 Object recognition
2.3.1 Introduction to AdaBoost
2.3.2 Template matching
2.4 Multi-camera tracking techniques
Chapter 3  Foreground Object Detection
3.1 The codebook background model
3.1.1 Introduction to the codebook background model
3.1.2 Color and brightness computation
3.1.3 Initialization and construction of the codebook background model
3.1.4 Foreground detection
3.2 The ViBe background model
3.2.1 Introduction to the ViBe background model
3.2.2 Initialization of the ViBe model
3.2.3 Update scheme of the ViBe model
3.3 Handling overlapping blocks among multiple clustered blocks
3.4 Comparison of background models
Chapter 4  Object Recognition
4.1 HOG features
4.2 SVM classification
Chapter 5  Surrounding-Map Coordinate Transformation
5.1 The coordinate-transformation matrix
5.2 Solving the transformation matrix from the camera's intrinsic and extrinsic parameters
5.3 Solving the transformation matrix from feature-point correspondences
Chapter 6  Experiments
6.1 Experimental environment
6.2 Analysis of the object detection results
6.3 Analysis of the object recognition results
6.4 Surrounding map images
Chapter 7  Conclusions and Future Work
References
References
[1] An, T. K. and M. H. Kim, "A new diverse AdaBoost classifier," in Proc. of Int. Conf. on Artificial Intelligence and Computational Intelligence (AICI), Sanya, China, Oct.23-24, 2010, pp.359-363.
[2] Barnich, O. and M. Van Droogenbroeck, "ViBe: A universal background subtraction algorithm for video sequences," IEEE Trans. on Image Processing, vol.20, no.6, pp.1709-1724, 2011.
[3] Bertozzi, M., A. Broggi, M. D. Rose, M. Felisa, A. Rakotomamonjy, and F. Suard, "A pedestrian detector using histograms of oriented gradients and a support vector machine classifier," in Proc. of 10th Int. IEEE Conf. on Intelligent Transportation Systems (ITSC), Seattle, WA, Sept.30-Oct.3, 2007, pp.143-148.
[4] Cao, X. B., H. Qiao, and J. Keane, "A low-cost pedestrian-detection system with a single optical camera," IEEE Trans. on Intelligent Transportation Systems, vol.9, no.1, pp.58-67, 2008.
[5] Carroll, R., M. Agrawal, and A. Agarwala, "Optimizing content-preserving projections for wide-angle images," ACM Trans. on Graphics, vol.28, no.3, 2009.
[6] Chen, C. H., Y. Yao, D. Page, B. Abidi, A. Koschan, and M. Abidi, "Camera handoff with adaptive resource management for multi-camera multi-object tracking," Image and Vision Computing, vol.28, no.6, pp.851-864, 2010.
[7] Chen, X., L. An, and B. Bhanu, "Multitarget tracking in nonoverlapping cameras using a reference set," IEEE Sensors Journal, vol.15, no.5, pp.2692-2704, 2015.
[8] Choi, M. J., A. Torralba, and A. S. Willsky, "A tree-based context model for object recognition," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol.34, no.2, pp.240-252, 2012.
[9] Chu, C. T., J. N. Hwang, J. Y. Yu, and K. Z. Lee, "Tracking across nonoverlapping cameras based on the unsupervised learning of camera link models," in Proc. of 6th Int. Conf. on Distributed Smart Cameras (ICDSC), Hong Kong, China, Oct.30-Nov.2, 2012, pp.1-6.
[10] Dalal, N. and B. Triggs, "Histograms of oriented gradients for human detection," in Proc. of IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), San Diego, CA, Jun.20-25, 2005, pp.886-893.
[11] Eshel, R. and Y. Moses, "Homography based multiple camera detection and tracking of people in a dense crowd," in Proc. of 26th IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Anchorage, Alaska, Jun.23-28, 2008, pp.1-8.
[12] Gennery, D. B., "Generalized camera calibration including fish-eye lenses," Int. Journal of Computer Vision, vol.68, no.3, pp.239-266, 2006.
[13] Gheissari, N., T. B. Sebastian, P. H. Tu, J. Rittscher, and R. Hartley, "Person reidentification using spatiotemporal appearance," in Proc. of IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), New York, NY, Jun.17-22, 2006, pp.1528-1535.
[14] Gutchess, D., M. Trajković, E. Cohen-Solal, D. Lyons, and A. K. Jain, "A background model initialization algorithm for video surveillance," in Proc. of 8th Int. Conf. on Computer Vision, Vancouver, BC, Canada, Nov.6-13, 2001, pp.733-740.
[15] Hamid, R., R. K. Kumar, M. Grundmann, K. Kim, I. Essa, and J. Hodgins, "Player localization using multiple static cameras for sports visualization," in Proc. of IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), San Francisco, CA, Jun.13-18, 2010, pp.731-738.
[16] Huang, S. C., "An advanced motion detection algorithm with video quality analysis for video surveillance systems," IEEE Trans. on Circuits and Systems for Video Technology, vol.21, no.1, pp.1-14, 2011.
[17] Huang, T. and S. Russell, "Object identification in a Bayesian context," in Proc. of 15th Int. Joint Conf. on Artificial Intelligence (IJCAI), Nagoya, Aichi, Japan, Aug.23-29, 1997, pp.1276-1282.
[18] Hughes, C., M. Glavin, E. Jones, and P. Denny, "Review of geometric distortion compensation in fish-eye cameras," in Proc. of IET Irish Signals and Systems Conf. (ISSC), Galway, Ireland, Jun.18-19, 2008, pp.162-167.
[19] Hughes, C., M. Glavin, and E. Jones, "Simple fish-eye calibration method with accuracy evaluation," Electronic Letters on Computer Vision and Image Analysis, vol.10, no.1, pp.54-62, 2011.
[20] Javed, O., Z. Rasheed, K. Shafique, and M. Shah, "Tracking across multiple cameras with disjoint views," in Proc. of Ninth IEEE Int. Conf. on Computer Vision, Nice, France, Oct.13-16, 2003, pp.952-957.
[21] Javed, O., K. Shafique, Z. Rasheed, and M. Shah, "Modeling inter-camera space-time and appearance relationships for tracking across non-overlapping views," Computer Vision and Image Understanding, vol.109, no.2, pp.146-162, 2008.
[22] Jiang, H., S. Fels, and J. J. Little, "A linear programming approach for multiple object tracking," in Proc. of IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Minneapolis, MN, Jun.17-22, 2007, pp.17-22.
[23] Khan, S., O. Javed, Z. Rasheed, and M. Shah, "Human tracking in multiple cameras," in Proc. of Int. Conf. on Computer Vision, Vancouver, BC, Canada, Jul.7-14, 2001, pp.331-336.
[24] Khan, S. and M. Shah, "Consistent labeling of tracked objects in multiple cameras with overlapping fields of view," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol.25, no.10, pp.1355-1360, 2003.
[25] Kim, K., T. H. Chalidabhongse, D. Harwood, and L. Davis, "Real-time foreground-background segmentation using codebook model," Real Time Imaging, vol.11, no.3, pp.172-185, 2005.
[26] Kuo, C. H., C. Huang, and R. Nevatia, "Inter-camera association of multi-target tracks by on-line learned appearance affinity models," in Proc. of 11th European Conf. on Computer Vision (ECCV), Heraklion, Crete, Greece, Sep.5-11, 2010, pp.383-396.
[27] Lee, P. H., Y. L. Lin, S. C. Chen, C. H. Wu, C. C. Tsai, and Y. P. Hung, "Viewpoint-independent object detection based on two-dimensional contours and three-dimensional sizes," IEEE Trans. on Intelligent Transportation Systems, vol.12, no.4, pp.1599-1608, 2011.
[28] Liem, M. and D. M. Gavrila, "Multi-person tracking with overlapping cameras in complex, dynamic environments," in Proc. of 20th British Machine Vision Conf. (BMVC), London, UK, Sep.7-10, 2009.
[29] Liu, X., L. Lin, S. Yan, H. Jin, and W. Tao, "Integrating spatio-temporal context with multiview representation for object recognition in visual surveillance," IEEE Trans. on Circuits and Systems for Video Technology, vol.21, no.4, pp.393-407, 2011.
[30] Makris, D., T. Ellis, and J. Black, "Bridging the gaps between cameras," in Proc. of IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Washington, DC, Jun.27-Jul.2, 2004, pp.II205-II210.
[31] Marinakis, D., G. Dudek, and D. J. Fleet, "Learning sensor network topology through Monte Carlo expectation maximization," in Proc. of IEEE Int. Conf. on Robotics and Automation, Barcelona, Spain, Apr.18-22, 2005, pp.4581-4587.
[32] Morariu, V. I. and O. I. Camps, "Modeling correspondences for multi-camera tracking using nonlinear manifold learning and target dynamics," in Proc. of IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), New York, NY, Jun.17-22, 2006, pp.545-552.
[33] Mundhenk, T. N., M. J. Rivett, X. Liao, and E. L. Hall, "Techniques for fisheye lens calibration using a minimal number of measurements," in Proc. of Intelligent Robots and Computer Vision XXI: Algorithms, Techniques, and Active Vision, Boston, MA, Nov.7-8, 2000, pp.181-190.
[34] Papageorgiou, C. and T. Poggio, "Trainable system for object detection," Int. Journal of Computer Vision, vol.38, no.1, pp.15-33, 2000.
[35] Pasula, H., S. Russell, M. Ostland, and Y. Ritov, "Tracking many objects with many sensors," in Proc. of 16th Int. Joint Conf. on Artificial Intelligence (IJCAI), Stockholm, Sweden, Jul.31-Aug.6, 1999, pp.1160-1167.
[36] Rahimi, A., B. Dunagan, and T. Darrell, "Simultaneous calibration and tracking with a network of non-overlapping sensors," in Proc. of IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Washington, DC, Jun.27-Jul.2, 2004, pp.I187-I194.
[37] Shao, J., N. Dong, F. Liu, and Z. Li, "A close-loop tracking approach for multi-view pedestrian tracking," Journal of Computational Information Systems, vol.7, no.2, pp.539-547, 2011.
[38] Song, B. and A. K. Roy-Chowdhury, "Robust tracking in a camera network: a multi-objective optimization framework," IEEE Journal on Selected Topics in Signal Processing, vol.2, no.4, pp.582-596, 2008.
[39] Stauffer, C. and W. E. L. Grimson, "Adaptive background mixture models for real-time tracking," in Proc. of IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Fort Collins, CO, Jun.23-25, 1999, pp.246-252.
[40] Suard, F., A. Rakotomamonjy, A. Bensrhair, and A. Broggi, "Pedestrian detection using infrared images and histograms of oriented gradients," in Proc. of IEEE Intelligent Vehicles Symp., Meguro-Ku, Tokyo, Jun.13-15, 2006, pp.206-212.
[41] Tieu, K., G. Dalley, and W. E. L. Grimson, "Inference of non-overlapping camera network topology by measuring statistical dependence," in Proc. of 10th IEEE Int. Conf. on Computer Vision (ICCV), Beijing, China, Oct.17-21, 2005, pp.1842-1849.
[42] Wang, X., "Intelligent multi-camera video surveillance: a review," Pattern Recognition Letters, vol.34, no.1, pp.3-19, 2013.
[43] Yilmaz, A., O. Javed, and M. Shah, "Object tracking: a survey," ACM Computing Surveys, vol.38, no.4, 2006.
Advisor: 曾定章 (Din-Chang Tseng)    Date of approval: 2016-8-8

For questions about this thesis, please contact the Promotion Services Division of the National Central University Library, TEL: (03)422-7151 ext. 57407, or by e-mail. - Privacy Policy Statement