Master's/Doctoral Thesis 995402006: Detailed Record
Name: Chien-Hung Chen (陳建宏)    Department: Computer Science and Information Engineering
Thesis title: 不同應用情境的自動偵測與追蹤技術研究
(The study on automatic detection and tracking techniques for various application scenarios)
Related theses
★ Video error concealment for large damaged areas and scene changes
★ Force feedback correction and rendering in a virtual haptic system
★ Multispectral satellite image fusion and infrared image synthesis
★ A laparoscopic cholecystectomy surgery simulation system
★ Dynamically loaded multiresolution terrain modeling in a flight simulation system
★ Wavelet-based multiresolution terrain modeling and texture mapping
★ Multiresolution optical flow analysis and depth computation
★ Volume-preserving deformation modeling for laparoscopic surgery simulation
★ Interactive multiresolution model editing techniques
★ Wavelet-based multiresolution edge tracking for edge detection
★ Multiresolution modeling based on quadric error and attribute criteria
★ Progressive image compression based on the integer wavelet transform and grey theory
★ Tactical simulation based on dynamically loaded multiresolution terrain modeling
★ Face detection and feature extraction using spatial relations of multilevel segmentation
★ Wavelet-based image watermarking and compression
★ Appearance-preserving and view-dependent multiresolution modeling
Files: Full text viewable in the system after 2026-01-01.
Abstract (Chinese) Automated security surveillance systems need to be developed: they not only save labor costs but also markedly improve the monitoring of secure environments. We discuss three camera-based surveillance application issues in various environments: abnormal human posture detection, joint multi-camera automatic detection and tracking, and autonomous tracking by a multicopter. The three issues are elaborated in turn below.
In a home environment, elderly or mobility-impaired people may fall, crouch, or slump to the floor while moving because of illness or dysfunction. If such an event is not discovered in time, it can be fatal. To reduce fatal events, an environmental monitoring system for elderly or mobility-impaired people is proposed. The proposed system considers four kinds of abnormal-behavior detection: (1) falling and crouching; (2) entering or leaving a room without permission; (3) leaving a designated area without permission; and (4) recording walking trajectories. The system uses an omni-directional camera for surveillance and image capture. Background subtraction is first applied to detect foreground objects; then a detection method based on principal component analysis (PCA) and one based on the height change of a trapezoidal body region are used to detect falling and crouching postures. Experiments and evaluations in a variety of environments were carried out on the proposed methods. The results show detection rates of 93% for the PCA-based and 92% for the trapezoid-height-based fall and crouch detection; 95% for a patient leaving the ward without permission; and 95% for a patient leaving the bed. The system obtained stable detection results in all experiments, demonstrating its feasibility.
The proposed abnormal human posture detection system has the following features: (1) an omni-directional camera captures 360-degree surround images; (2) PCA detects the direction and length of the main body axis; (3) the height change of the trapezoidal body region determines the person's state.
Because of lens-structure limits, a single camera's monitoring range is finite. Multiple fisheye cameras can monitor a wide area and trace the complete trajectory of a moving object. Using two fisheye cameras, this study proposes an automatic detection and tracking surveillance system composed of two main modules: foreground detection and foreground tracking. Background subtraction is first applied to detect foreground objects. For tracking, a Kalman filter predicts each object's position. A transform table is pre-built for the overlapping camera views; when a foreground object crosses between monitoring ranges, the table reveals that the objects tracked by different cameras are actually the same one, so joint tracking keeps a consistent object label. To increase matching reliability, appearance features of the foreground object, such as color, are also used. Both cameras track multiple targets simultaneously, each holding its own tracker; when an object is briefly occluded, the system keeps predicting its position and yields a smooth trajectory. In experiments with two fisheye cameras on several videos with varying illumination and numbers of people, the average sensitivity was 96.7% with a false-positive rate of 0.45%; errors occurred when a foreground object's clothing resembled the background color. Adding Kalman-filter-assisted tracking raised the sensitivity to 98.55%. The results show that the joint foreground detection and tracking system works effectively and stably under many indoor surveillance conditions, such as illumination changes, shadow interference, and occlusion.
The proposed joint multi-camera automatic detection and tracking system has the following features: (1) foreground detection that adapts to illumination changes and noise; (2) H-S color histogram matching and distance-similarity matching; (3) cross-camera coordinate transformation for continuous target tracking.
Multicopters have developed passive tracking: a tracked device is placed on the target, and the multicopter follows the device's signal. When tracking a non-specific target, however, no device can be placed on it beforehand, and tracking fails. This study proposes a computer-vision tracking system for multicopters that tracks an arbitrary target without any extra tracked device. We first improve KCF (kernelized correlation filters) with scale candidate maps so that KCF can adapt to changes in target size, and then use a feature-matching-based detection algorithm to recover the target when it is lost. Experiments and evaluations were carried out in various environments on the proposed methods. Video scenes with scaling and with occlusion were used to judge the algorithm's tracking ability in those cases; in addition, tracking rate, overlap accuracy, and execution speed were used as evaluation criteria. Ground-truth target positions were first annotated manually to define successful tracking, and these annotations were then used to compute the algorithm's tracking rate, overlap accuracy, and execution speed. The results show that the proposed algorithm runs at 26 fps, with a tracking rate of 88% in the scaling cases and 98% in the occlusion cases, and an overlap accuracy of 87%. In the trade-off between speed and accuracy, we sacrificed execution speed for more robust tracking. The system obtained stable detection results in all experiments, demonstrating its feasibility.
The proposed autonomous multicopter tracking system has the following features: (1) computer vision tracks an arbitrary target without an extra tracked device; (2) scale candidate maps improve KCF so that it adapts to target size changes; (3) when the target is occluded or disappears, a feature-matching-based algorithm recovers it.
Abstract (English) Automated security surveillance systems need to be developed: they not only save labor costs but also markedly improve the monitoring of secure environments. In this dissertation, we discuss three application issues of camera-based surveillance in various environments: abnormal posture detection, automatic detection and tracking with multiple cameras, and multicopter autonomous tracking. These three research issues are discussed as follows.
In various general environments, aged or disabled people may fall, squat, or sit down on the ground because of sickness or dysfunction while moving. If such events are not discovered in time, fatal danger may follow. To reduce fatal events, an environmental monitoring system for aged or disabled people is proposed. In this study, four kinds of abnormal-behavior detection are considered in the proposed system: (i) falling down and crouching, (ii) going out of a room without permission, (iii) leaving a specified area without permission, and (iv) recording the walking trajectory. An omni-directional camera is used to capture images for monitoring. The background subtraction method is first applied to extract targets. Then principal component analysis (PCA) and the height change of a person's trapezoidal bounding box are used to detect falling and crouching postures. Experiments and evaluations in a variety of environments have been carried out on the proposed methods. The results show that the PCA method runs at sixty frames per second and the trapezoidal-bounding-box method at eighty frames per second. The detection rate of fall and crouch detection using PCA is 93%, and that using the bounding-box height change is 92%; the detection rate for leaving the ward is 95%, and for leaving the bed 95%. Stable detection results were obtained, showing the feasibility of the proposed system.
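As a rough, self-contained sketch of the background-subtraction step described above (not the dissertation's actual implementation; the per-pixel threshold of 30 grey levels is an assumed illustrative value), the core idea is a thresholded absolute difference between the current frame and a reference background:

```python
def subtract_background(frame, background, thresh=30):
    """Mark a pixel as foreground (1) when it differs from the reference
    background by more than `thresh` grey levels. `thresh` is an assumed
    illustrative value, not a parameter from the dissertation."""
    return [[1 if abs(p - b) > thresh else 0
             for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

# A 1x2 "frame": the first pixel brightened by 50 levels becomes foreground.
mask = subtract_background([[100, 10]], [[50, 12]])  # → [[1, 0]]
```

Real systems replace the static reference with an adaptive background model, but the foreground mask fed to the posture analysis has this shape.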
The proposed abnormal posture detection method has the following properties: (i) an omni-directional camera easily captures 360-degree surround images, (ii) principal component analysis (PCA) detects the main direction and length of the person's body, (iii) the height change of the person's trapezoidal bounding box determines the person's situation.
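To make property (ii) concrete, here is a minimal pure-Python sketch of the PCA idea: the 2x2 covariance of the foreground pixel coordinates gives the body's main axis, and a near-horizontal, strongly elongated axis suggests a fall. The angle and elongation thresholds are illustrative assumptions, not the dissertation's tuned values.

```python
import math

def principal_axis(points):
    """Return (angle_deg, elongation) of the principal axis of a set of
    (x, y) foreground-pixel coordinates via 2x2 covariance eigen-analysis."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # Closed-form eigenvalues of [[sxx, sxy], [sxy, syy]].
    t = sxx + syy
    d = math.sqrt((sxx - syy) ** 2 + 4 * sxy ** 2)
    lam1, lam2 = (t + d) / 2, (t - d) / 2
    # Principal-axis angle from the x-axis: 0° is horizontal, 90° vertical.
    angle = math.degrees(math.atan2(2 * sxy, sxx - syy) / 2)
    elongation = lam1 / max(lam2, 1e-9)
    return angle, elongation

def is_fallen(points, angle_thresh=45.0, elong_thresh=2.0):
    """Flag a fall when the main body axis is closer to horizontal than
    vertical and the silhouette is clearly elongated (thresholds assumed)."""
    angle, elong = principal_axis(points)
    return abs(angle) < angle_thresh and elong > elong_thresh
```

A tall, upright silhouette yields an angle near 90° and is not flagged; a lying silhouette yields an angle near 0° and is.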
The view scope of a single camera is finite and limited by scene structures. Multiple fisheye cameras can monitor a wide area and trace the complete trajectory of a moving object. In this study, an automatic detection and tracking system with two fisheye cameras for environment surveillance is proposed. The proposed system is composed of two major modules: foreground detection and foreground tracking. The background subtraction method is first applied to extract targets; a Kalman filter then predicts pedestrian motion. A transform table is pre-established to associate multi-camera data in the overlapping areas. When an object crosses disjoint camera views, the lookup table provides enough information to recognize that the moving objects seen in different camera views actually belong to the same object, keeping consistent labels on it. To improve tracking reliability, motion and color appearance features are used to match the detected objects across cameras. The results show an average sensitivity of 96.7% and an average false-positive rate of 0.45%, the errors arising when foreground objects resemble the background. The average sensitivity rises to 98.55% with the Kalman filter. This demonstrates that the proposed method works well under challenging conditions such as illumination change, shadow interference, and object occlusion.
The proposed automatic detection and tracking method has the following properties: (i) the foreground detection and tracking method adapts to brightness changes and noise, (ii) H-S color distribution comparison and distance-similarity comparison support matching, (iii) camera-overlap features and multi-camera coordinate conversion enable continuous tracking of objects.
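Property (iii) can be illustrated with a toy hand-off structure. This is a hypothetical sketch: the grid resolution and the `JointTracker` interface are invented for illustration, and the real system additionally runs a Kalman filter and appearance matching per track. A pre-built table maps grid cells of camera A's overlap region to camera B coordinates, so a label survives the crossing:

```python
CELL = 10  # assumed grid resolution (pixels) of the transform table

def cell(pt):
    """Quantise a pixel position to a grid cell of the transform table."""
    return (pt[0] // CELL, pt[1] // CELL)

class JointTracker:
    def __init__(self, transform_table):
        self.table = transform_table  # {cell in camera A: (x, y) in camera B}
        self.tracks_a = {}            # label -> last observed position in A
        self.tracks_b = {}            # label -> position handed over to B

    def update_a(self, label, pos):
        """Record a detection in camera A; if it falls in the overlap
        region, hand the same label over to camera B via the table."""
        self.tracks_a[label] = pos
        mapped = self.table.get(cell(pos))
        if mapped is not None:
            self.tracks_b[label] = mapped

# A person detected at (52, 57) in camera A lands in overlap cell (5, 5),
# so the same label "p1" appears in camera B at the mapped coordinates.
jt = JointTracker({(5, 5): (12, 34)})
jt.update_a("p1", (52, 57))
```

The point of the table is that no on-line geometric calibration is needed at tracking time: consistent labeling across views reduces to a dictionary lookup.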
A multicopter can be guided by a passive tracking device mounted on a specified target. However, for a non-cooperative target no such device can be mounted, and passive tracking fails. We propose a vision-based tracking system for multicopters that uses computer vision to track any target without additional tracking devices. In this study, scale candidate graphs and scale tables are proposed to improve kernelized correlation filters (KCF), yielding stable results when the target scale changes. In the proposed adaptable scaled KCF algorithm, when KCF tracking fails, a feature-based matching detector is used to re-detect the target. In the experiments, videos with scaled and with occluded targets are evaluated. The experimental results reveal that the proposed method can adapt to changes of scale and occlusion of targets. In addition, the tracking rate, accuracy, and execution speed are evaluated. The results show that the proposed algorithm runs at 26 fps; the tracking rate reaches 88% in the scaled cases and 98% in the occluded cases, and the overlap rate reaches 87%. In the trade-off between speed and accuracy, we sacrifice execution speed in exchange for more robust tracking. Several experiments on various scenes based on the proposed approach were conducted and evaluated.
The proposed adaptable scaled KCF method has the following properties: (i) it allows a multicopter to track non-specific targets through computer-vision tracking, (ii) the adaptable scaled KCF adapts to target size changes and obtains the correct target image, (iii) matching points are used to find the target position and scale, successfully recovering the target.
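Property (ii) can be caricatured as scoring the stored appearance template at a few candidate scales and keeping the best match. This sketch uses normalised cross-correlation on assumed feature vectors and omits the Fourier-domain machinery of the real KCF; `extract_features` is a hypothetical stand-in for re-sampling the image patch at each candidate scale.

```python
import math

def ncc(a, b):
    """Normalised cross-correlation of two equal-length feature vectors;
    1.0 for a perfect linear match, 0.0 when either vector is constant."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da and db else 0.0

def best_scale(template, extract_features, scales=(0.9, 1.0, 1.1)):
    """Score each candidate scale around the previous target size and
    return the scale whose appearance best matches the template.
    The candidate set is an assumed example, not the thesis's table."""
    scored = [(ncc(template, extract_features(s)), s) for s in scales]
    return max(scored)[1]
```

In the real tracker, the per-scale score would come from the correlation-filter response map; the selection step, however, is exactly this argmax over candidate scales.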
Keywords (Chinese) ★ automatic detection and tracking
★ surveillance system
★ computer vision
★ image processing
★ abnormal behavior detection
★ multi-camera tracking
★ scale adaptation
Keywords (English) ★ automatic detection and tracking
★ surveillance system
★ computer vision
★ image processing
★ abnormal behavior detection
★ multi-camera tracking
★ adaptable scaled
Table of Contents
Abstract (Chinese)
Abstract (English)
Acknowledgements
Contents
List of Figures
List of Tables
Chapter 1 Introduction
1.1. Motivation
1.2. Overview of this study
1.2.1. Abnormal posture detection in an omni-directional-camera-based surveillance
1.2.2. Automatic detection and tracking in multi-fisheye-camera surveillance
1.2.3. Autonomous tracking by an adaptable scaled KCF
1.3. Organization of this dissertation
Chapter 2 Related works
2.1. Research in surveillance systems
2.1.1. Motion detection
2.1.2. Background subtraction
2.1.3. Shadow detection
2.1.4. Multi-camera tracking model
2.2. Research in tracking systems for multicopters
2.2.1. Generate reaction graphs
2.2.2. Feature extraction
2.2.3. Target matching
Chapter 3 Abnormal posture detection in an omni-directional-camera-based surveillance
3.1. Structure and features of an omni-camera
3.1.1. Structure and imaging principle
3.1.2. Panoramic image conversion
3.2. The proposed object detector
3.2.1. Background subtraction for motion detection
3.2.2. PCA for object orientation detection
3.2.3. Height change of the patient's trapezoidal bounding box for object shape detection
3.3. The proposed abnormal posture detector
3.3.1. Patient falling down and crouching
3.3.2. Going out of the ward without permission
3.3.3. Patient leaving the bed without permission
3.3.4. Recording walking trajectory
3.4. Experiments
3.4.1. Patient falling down and crouching
3.4.2. Going out of the ward without permission
3.4.3. Patient leaving the bed without permission
3.4.4. Recording walking trajectory
Chapter 4 Automatic detection and tracking in multi-fisheye-camera surveillance
4.1. Foreground detection
4.1.1. Background modeling
4.1.2. Background subtraction to detect foreground
4.1.3. Foreground processing
4.1.4. Background updating
4.1.5. Feature capture of foreground
4.2. Foreground tracking
4.2.1. Feature matching
4.2.2. System parameter setup of the Kalman filter
4.2.3. Multi-camera tracking
4.3. Experiments
4.3.1. Foreground detection experimental results
4.3.2. Foreground tracking experimental results
Chapter 5 Autonomous tracking by an adaptable scaled KCF algorithm
5.1. The proposed method
5.1.1. Correlation filter
5.1.2. Generating candidate graphs
5.1.3. Re-detection after target disappearance
5.1.4. Feature matching
5.1.5. Target position and scale re-detection
5.2. Experiments
Chapter 6 Conclusions
6.1. Abnormal posture detection in an omni-directional-camera-based surveillance
6.2. Automatic detection and tracking in multi-fisheye-camera surveillance
6.3. Autonomous tracking by an adaptable scaled KCF algorithm
References
Advisor: Din-Chang Tseng (曾定章)    Review date: 2020-12-22
