Name |
Hsiang-Yuan Chan (詹翔淵)
Department |
Institute of Software Engineering
Thesis Title |
Camera motion compensation based on camera parameter (基於相機參數之相機移動補償)
Files |
Full text available for browsing in the system after 2026-8-1
Abstract |
Object tracking aims to detect a target's position in a continuous image sequence, predict its motion trajectory, and update the target's position frame by frame. To accomplish this, two kinds of features are commonly used to associate the same target across frames. The first is the target's appearance: pixel features inside the detection boxes of consecutive frames are matched to locate the target in the current frame. The second is the target's motion: a dynamic system model such as the Kalman filter predicts the target's current position from its past positions. Once both features are available, they are weighted differently depending on the situation to compute the distance cost between predicted and detected boxes, and the Hungarian algorithm matches the lowest-cost pairs of predicted and detected boxes and assigns the corresponding IDs to complete tracking.
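As an illustration of the association step described above, the following minimal Python sketch (using NumPy and SciPy) fuses a motion cost, taken as 1 - IoU between Kalman-predicted boxes and current detections, with a precomputed appearance cost under a fixed weight, then solves the assignment with the Hungarian algorithm. The (x1, y1, x2, y2) box format, the weight w_motion, the gating threshold max_cost, and the helper names are assumptions for illustration, not the exact formulation used in the thesis.

import numpy as np
from scipy.optimize import linear_sum_assignment

def iou_matrix(a, b):
    # Pairwise IoU between boxes a (N x 4) and b (M x 4) in (x1, y1, x2, y2) form.
    tl = np.maximum(a[:, None, :2], b[None, :, :2])
    br = np.minimum(a[:, None, 2:], b[None, :, 2:])
    inter = np.prod(np.clip(br - tl, 0.0, None), axis=2)
    area_a = np.prod(a[:, 2:] - a[:, :2], axis=1)
    area_b = np.prod(b[:, 2:] - b[:, :2], axis=1)
    return inter / (area_a[:, None] + area_b[None, :] - inter + 1e-9)

def associate(pred_boxes, det_boxes, appearance_cost, w_motion=0.7, max_cost=0.9):
    # Motion cost: 1 - IoU between Kalman-predicted boxes and current detections.
    motion_cost = 1.0 - iou_matrix(pred_boxes, det_boxes)
    # The situation-dependent weighting is reduced here to a single fixed weight.
    cost = w_motion * motion_cost + (1.0 - w_motion) * appearance_cost
    # Hungarian algorithm: minimum-cost one-to-one matching of tracks to detections.
    rows, cols = linear_sum_assignment(cost)
    # Reject pairs whose cost is too high; they remain unmatched or start new tracks.
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_cost]

A full tracker would compute appearance_cost from re-identification embeddings (for example, cosine distance) and would handle unmatched tracks and detections separately.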
Under a moving camera, predicting motion with the traditional Kalman filter alone can introduce bias: when the camera moves, image pixels shift in the direction opposite to the camera motion, and the Kalman filter cannot estimate this shift by itself, so the predicted target positions drift and targets may be lost. To address this, the BoT-SORT model introduced Camera Motion Compensation (CMC) in 2022: sparse optical flow is used to predict how feature points move between frames, the error caused by camera motion is computed from that flow, and the Kalman filter's predictions are corrected with the estimated error. This markedly improved tracking performance under moving cameras. There is still room for improvement, however: the computational cost of the sparse optical flow used to estimate the offset is not negligible, and because the prediction relies on image feature points, image quality also affects prediction accuracy to some degree.
In this thesis, we propose a method that computes the camera-motion error from camera parameters and corrects the Kalman filter through a camera motion compensation mechanism. Using the pinhole camera model, the method converts camera motion in 3D space into the corresponding offset in the 2D image. This not only avoids the heavy computation required by sparse optical flow, needing only a low computational cost, but also removes the dependence on image feature points. In addition, inspired by related work on object tracking, we expand the predicted and detected boxes at the association stage according to the relative positions of the targets, reducing track loss caused by small errors. As a result, our tracking model achieves faster and more stable performance.
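The conversion from 3D camera motion to a 2D image offset can be illustrated with a minimal sketch of the rotation-only case, where the pinhole model gives the pixel mapping H = K R K^-1 without any scene depth or feature points. The intrinsic matrix K, the inter-frame rotation R, the expansion ratio, and the corner-warping shortcut below are assumptions made for illustration; they are not the thesis's exact derivation.

import numpy as np

def rotation_homography(K, R):
    # For a camera that only rotates between frames, pixels map through the
    # pinhole-model homography H = K R K^-1; no depth or feature points needed.
    return K @ R @ np.linalg.inv(K)

def compensate_box(box, K, R):
    # Warp a Kalman-predicted (x1, y1, x2, y2) box from the previous camera
    # orientation into the current one, i.e. apply the 2-D offset that the
    # known camera rotation alone induces in the image. Warping only the two
    # corners is a coarse approximation of the full box warp.
    H = rotation_homography(K, R)
    corners = np.array([[box[0], box[1], 1.0],
                        [box[2], box[3], 1.0]]).T        # shape (3, 2)
    warped = H @ corners
    warped = warped[:2] / warped[2]                      # perspective division
    return np.array([warped[0, 0], warped[1, 0], warped[0, 1], warped[1, 1]])

def expand_box(box, ratio=0.1):
    # Symmetrically enlarge a box before computing IoU at the association
    # stage, so small residual errors still produce overlap.
    w, h = box[2] - box[0], box[3] - box[1]
    return np.array([box[0] - ratio * w, box[1] - ratio * h,
                     box[2] + ratio * w, box[3] + ratio * h])

# Example with assumed intrinsics and a 5-degree pan about the vertical axis.
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])
yaw = np.deg2rad(5.0)
R = np.array([[np.cos(yaw), 0.0, np.sin(yaw)],
              [0.0, 1.0, 0.0],
              [-np.sin(yaw), 0.0, np.cos(yaw)]])
print(expand_box(compensate_box(np.array([900.0, 500.0, 1000.0, 600.0]), K, R)))

The rotation-only case is shown because it needs no depth information; compensating for camera translation would additionally require an assumption about each target's distance from the camera.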
Keywords |
★ Object tracking ★ Camera parameter ★ Camera motion compensation |
Table of Contents |
Abstract (Chinese)
Abstract
Contents
List of Figures
List of Tables
1 Introduction
2 Related Work
2.1 Deep OC-SORT
2.2 BoT-SORT
2.3 SMILEtrack
2.4 UCMCTrack
2.5 ExpansionIoU
3 Method
3.1 Camera motion compensation
3.2 Rotation angle compensation
3.3 Expand Bounding box
4 Experiment
4.1 Datasets
4.2 Implementation Details
4.3 Experimental Results
5 Conclusion
6 Reference
References |
[1] Zhang, Yifu, Chunyu Wang, Xinggang Wang, Wenjun Zeng and Wenyu Liu. “FairMOT: On the Fairness of Detection and Re-identification in Multiple Object Tracking.” International Journal of Computer Vision 129 (2020): 3069-3087.
[2] Liu, Zelin, Xinggang Wang, Cheng Wang, Wenyu Liu and Xiang Bai. “SparseTrack: Multi-Object Tracking by Performing Scene Decomposition based on Pseudo-Depth.” ArXiv abs/2306.05238 (2023).
[3] Aharon, Nir, Roy Orfaig and Ben-Zion Bobrovsky. “BoT-SORT: Robust Associations Multi-Pedestrian Tracking.” ArXiv abs/2206.14651 (2022).
[4] Maggiolino, Gerard, Adnan Ahmad, Jinkun Cao and Kris Kitani. “Deep OC-SORT: Multi-Pedestrian Tracking by Adaptive Re-Identification.” 2023 IEEE International Conference on Image Processing (ICIP) (2023): 3025-3029.
[5] Wang, Yuhan, Jun-Wei Hsieh, Ping-Yang Chen, Ming-Ching Chang, Hung-Hin So and Xin Li. “SMILEtrack: SiMIlarity LEarning for Occlusion-Aware Multiple Object Tracking.” AAAI Conference on Artificial Intelligence (2022).
[6] Yi, Kefu, Kai Luo, Xiaolei Luo, Jiangui Huang, Hao Wu, Rongdong Hu and Wei Hao. “UCMCTrack: Multi-Object Tracking with Uniform Camera Motion Compensation.” AAAI Conference on Artificial Intelligence (2023).
[7] Huang, Hsiang-Wei, Cheng-Yen Yang, Jenq-Neng Hwang and Chung-I Huang. “Iterative Scale-Up ExpansionIoU and Deep Features Association for Multi-Object Tracking in Sports.” 2024 IEEE/CVF Winter Conference on Applications of Computer Vision Workshops (WACVW) (2023): 163-172.
[8] Cao, Jinkun, Xinshuo Weng, Rawal Khirodkar, Jiangmiao Pang and Kris Kitani. “Observation-Centric SORT: Rethinking SORT for Robust Multi-Object Tracking.” 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2022): 9686-9696.
[9] Zeng, Fangao, Bin Dong, Tiancai Wang, Cheng Chen, X. Zhang and Yichen Wei. “MOTR: End-to-End Multiple-Object Tracking with TRansformer.” ArXiv abs/2105.03247 (2021).
[10] Zhou, Xingyi, Tianwei Yin, Vladlen Koltun and Philipp Krähenbühl. “Global Tracking Transformers.” 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2022): 8761-8770.
指導教授 |
施國琛(Shih, Timothy K.)
|
Date of Approval |
2024-7-15