Master's/Doctoral Thesis 975402016 Detailed Information




Name: Chih-Chia Weng (翁志嘉)    Department: Computer Science and Information Engineering
Thesis Title: Vehicle tracking and detection via feature point matching and adaptive sampling in aerial surveillance videos
Related Theses
★ Face Replacement System for Specified Subjects in Videos
★ Single-Finger Virtual Keyboard Using a Single Camera
★ Vision-Based Recognition System for Handwritten Zhuyin Symbol Combinations
★ Vehicle Detection in Aerial Images Using Dynamic Bayesian Networks
★ Video-Based Handwritten Signature Verification
★ Moving Skin-Color Region Detection Using Gaussian Mixture Models of Skin Color and Shadow Probability
★ Crowd Segmentation with Confidence Levels in Images
★ Region Segmentation and Classification of Aerial Surveillance Images
★ Comparative Analysis of Different Features and Regression Methods for Crowd Counting Applications
★ Vision-Based Robust Multi-Fingertip Detection and Human-Machine Interface Applications
★ Traffic Flow Estimation from Videos Captured at Night with Raindrop-Contaminated Lenses
★ Image Feature Point Matching for Landmark Image Retrieval
★ Automatic Region-of-Interest Segmentation and Trajectory Analysis in Remote Traffic Images
★ Short-Term Solar Irradiance Prediction Based on Regression Models Using All-Sky Image Features and Historical Information
★ Analysis of the Performance of Different Classifiers for Cloud Detection Application
★ Cloud Tracking and Solar Occlusion Prediction from All-Sky Images
File
  1. This electronic thesis has been approved for immediate open access.
  2. The open-access electronic full text is licensed only for personal, non-profit retrieval, reading, and printing by users for academic research purposes.
  3. Please comply with the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast the content without authorization, to avoid infringement.

Abstract (Chinese) In recent years, as unmanned aerial vehicle (UAV) manufacturing technology has continued to advance, aerial videos have become increasingly easy to obtain, and research on aerial video analysis has become a popular topic. UAV footage has a wide range of applications, including military operations, law enforcement, search and rescue, and traffic monitoring and management.
Aerial videos are usually captured by cameras mounted on airplanes or UAVs. Because these platforms record from high altitude, the footage has a wide field of view, low resolution, and variable capture height. In addition, as the platform moves, the background of the captured scene changes continuously; since the camera is mounted on the platform, it is also subject to jitter, rotation, and drift, while the vehicles on the ground move as well. As a result, many motion vectors coexist in the aerial frames, which makes the problem considerably more complex. In this study, we aim to detect and track vehicles in aerial videos effectively. Based on the TLD (Tracking-Learning-Detection) framework, we propose an efficient detection and tracking method.
To detect and track vehicles in aerial videos effectively, we first detect vehicles with DBNs (Dynamic Bayesian Networks) and then track them using SURF feature points. However, because aerial videos usually have low resolution, the feature points tend to be unstable. To address this problem, we adjust the feature points with a compensation mechanism based on the concept of Monte Carlo sampling, and update the feature model with a weighting scheme; this design substantially improves tracking performance. In addition, we exploit the complementary properties of MHI (Motion History Images) and corner images for vehicle detection. Experimental results show that this approach effectively reduces the detection errors of the DBN-based detection algorithm.
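The complementary use of MHI and corner images described above can be illustrated with a minimal sketch. The following Python/OpenCV fragment is a hypothetical illustration only, not the thesis implementation: the function names, thresholds, and decay duration are assumptions. It accumulates a motion history image from frame differences, builds a Harris corner map, and combines the two with an AND operation so that only moving, textured regions remain as vehicle candidates.

```python
import cv2
import numpy as np

MHI_DURATION = 1.0       # assumed decay window (seconds)
DIFF_THRESHOLD = 30      # assumed frame-difference threshold
CORNER_THRESHOLD = 0.01  # assumed fraction of the maximum Harris response

def update_mhi(mhi, prev_gray, gray, timestamp):
    """Update a simple motion history image from two consecutive grayscale frames."""
    diff = cv2.absdiff(gray, prev_gray)
    _, motion_mask = cv2.threshold(diff, DIFF_THRESHOLD, 1, cv2.THRESH_BINARY)
    # Pixels that moved receive the current timestamp; stale entries fade out.
    mhi[motion_mask == 1] = timestamp
    mhi[mhi < timestamp - MHI_DURATION] = 0
    return mhi

def motion_and_corner_mask(mhi, gray):
    """AND the motion mask with a corner map: keep pixels that both move and have texture."""
    motion = (mhi > 0).astype(np.uint8) * 255
    corners = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    corner_mask = (corners > CORNER_THRESHOLD * corners.max()).astype(np.uint8) * 255
    corner_mask = cv2.dilate(corner_mask, np.ones((3, 3), np.uint8))
    return cv2.bitwise_and(motion, corner_mask)
```

Under this assumed scheme, the resulting mask would be fed to connected-component extraction to produce candidate vehicle regions that complement the DBN detections.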
Abstract (English)
With the rapid advance of Unmanned Aerial Vehicle (UAV) design and manufacturing technologies, the analysis of aerial videos taken from aerial vehicles has become an important issue. It has a variety of applications, such as military operations, law enforcement, search and rescue, and traffic monitoring and management. One of the most important topics in aerial surveillance systems is vehicle detection and tracking. In this study, we propose a vehicle tracking system for aerial surveillance videos. The experimental videos in this work have low frame rates, low resolution, and variable capture altitude. To achieve our goal, we utilize Dynamic Bayesian Networks (DBNs) to detect vehicles in the initial step, and apply an adaptive detection model based on Motion History Images (MHI) and corner images to detect vehicles in subsequent detection iterations. We also extract feature points on the detected vehicles for tracking. In aerial videos with low contrast and low resolution, the feature points are not stable across successive frames. To solve this problem, we design a weighting scheme and incorporate the concept of Monte Carlo methods to update the vehicle feature points. In our proposed method, we perform particle sampling around the vehicle feature points and acquire the appearance models of the image patches at the sampled points. The sampling area is dynamically adjusted according to the matching conditions of the vehicle feature points. Each vehicle feature point is updated to the sampled point whose appearance is most similar to the target appearance model. The weights of the feature points are also updated to reflect the confidence level of the tracked feature points. Experimental results validate that the proposed method substantially improves detection and tracking accuracy on a challenging dataset.
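The sampling-and-update idea in the paragraph above can be sketched roughly as follows. This Python/OpenCV/NumPy fragment is an assumption-laden illustration, not the thesis code: the patch size, sample count, Gaussian sampling, normalized cross-correlation similarity, and weight-blending factor are all stand-ins chosen for the example.

```python
import cv2
import numpy as np

PATCH = 15       # assumed patch half-size (pixels)
N_SAMPLES = 50   # assumed number of sampled candidate positions per feature point

def extract_patch(gray, x, y):
    """Image patch centered at (x, y); returns None near the image border."""
    x, y = int(round(x)), int(round(y))
    patch = gray[y - PATCH:y + PATCH + 1, x - PATCH:x + PATCH + 1]
    return patch if patch.shape == (2 * PATCH + 1, 2 * PATCH + 1) else None

def update_feature_point(gray, point, target_patch, weight, radius):
    """Sample candidates around a feature point and keep the best-matching one.

    `point` is an (x, y) NumPy array; `radius` is the sampling spread, which would
    be enlarged when matching is poor and shrunk when matching is good (the
    'dynamic' part of the sampling)."""
    candidates = point + np.random.normal(0.0, radius, size=(N_SAMPLES, 2))
    best_score, best_point = -1.0, point
    for cx, cy in candidates:
        patch = extract_patch(gray, cx, cy)
        if patch is None:
            continue
        # Normalized cross-correlation as a stand-in appearance similarity measure.
        score = cv2.matchTemplate(patch, target_patch, cv2.TM_CCOEFF_NORMED)[0, 0]
        if score > best_score:
            best_score, best_point = score, np.array([cx, cy])
    # Weight update: higher similarity raises confidence in the tracked point.
    new_weight = 0.8 * weight + 0.2 * max(best_score, 0.0)
    return best_point, new_weight, best_score
```

In such a scheme, the returned weight could then decide whether a feature point is kept in the vehicle model or discarded as unreliable, which mirrors the confidence-level role described in the abstract.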
Keywords (Chinese) ★ Dynamic Bayesian Networks
★ Motion History Images
★ Aerial Video
★ Vehicle Tracking
★ Vehicle Detection
Keywords (English) ★ Dynamic Bayesian Networks
★ Motion History Images
★ Aerial Video
★ Vehicle Tracking
★ Vehicle Detection
Table of Contents
I. INTRODUCTION 1
II. OVERVIEW OF THE PROPOSED FRAMEWORK WITH RELEVANT TECHNIQUES 5
A. TLD (Tracking-Learning-Detection) 6
B. Initial Vehicle Detection 6
C. SURF (Speeded-Up Robust Features) 6
D. HOG (Histogram of Oriented Gradients) 9
E. MHI (Motion History Images) 10
III. THE PROPOSED ALGORITHM 11
A. Initial Detection Phase 11
A.1. Object Detection 11
A.2. Feature extraction 13
B. Learning Phase 13
C. Tracking Phase 15
C.1. Feature points extraction & Matching 15
C.2. Dynamic Particle Sampling 23
C.3. Vehicle Feature Points Update 29
D. Detection Phase 31
D.1. Perspective Subtraction 32
D.2. MHI (Motion History Images) 36
D.3. Corner Image & AND Operation 44
(1) Enhance Subtraction 44
(2) Corner detection step 47
D.4. Object Extraction 48
IV. EXPERIMENTAL RESULTS 51
A. Tracking results 51
B. Detection results 54
C. Performance results 56
V. CONCLUSION 58
VI. FUTURE WORK 60
REFERENCES 61
Advisor: Hsu-Yung Cheng (鄭旭詠)    Review Date: 2017-07-25
