Thesis 101523009: Detailed Record




Author: Hsiao-Tzu Chen (陳筱慈)    Department: Communication Engineering
Thesis Title: Robust Visual Tracking Using Co-inference Fusion for Mixed Sequence
(Chinese title: 基於共推論融合的混合影像序列之強健視覺追蹤)
Related Theses
★ Light-Adaptive Video Encoder Design for In-Vehicle Video
★ An Improved Head Tracking System Based on Particle Filtering
★ Fast Mode Decision Algorithms for Spatial and CGS Scalable Video Encoders
★ A Robust Active Appearance Model Search Algorithm for Facial Expression Recognition
★ Multi-view Video Coding with Epipolar-Geometry-Based Inter-view Prediction and Fast Inter-frame Prediction Direction Decision
★ A Stereo Matching Algorithm for Homogeneous Regions Based on Improved Belief Propagation
★ Baseball Trajectory Recognition Based on a Hierarchical Boosting Algorithm
★ Fast Reference Frame Direction Decision for Multi-view Video Coding
★ Fast Mode Decision for CGS Scalable Encoders Based on Online Statistics
★ An Improved Active Shape Model Matching Algorithm for Lip Shape Recognition
★ Object Tracking on Mobile Platforms Based on a Motion-Compensated Model
★ Matching-Cost-Based Occlusion Detection for Asymmetric Stereo Matching
★ Motion-Based Fast Mode Decision for Multi-view Video Coding
★ A Fast Local L-SVMs Ensemble Classifier for Place Image Recognition
★ Fast Depth Video Coding Mode Decision Oriented toward High-Quality Synthesized Views
★ Multi-Object Tracking for Mobile Cameras Based on a Motion-Compensated Model
  1. The author has agreed to make the electronic full text of this thesis available immediately.
  2. The open-access electronic full text is licensed only for personal, non-commercial retrieval, reading, and printing for academic research purposes.
  3. Please comply with the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast it without authorization.

Abstract (Chinese) In visual tracking, image capture sometimes cannot avoid the influence of specular reflection. Since the existing literature rarely examines how this problem degrades tracking performance, this thesis proposes a robust visual tracking algorithm based on co-inference fusion. Reflection separation is applied to obtain the illumination image within the mixed images, from which a non-reflection mask is estimated to indicate the reflection-free regions of each mixed image and to build a more accurate color distribution model of the target. To prevent the reflection image in consecutive frames from shifting with camera motion, camera motion is estimated and compensated on the mixed images before reflection separation. The overall tracking system is built on a particle filter: the RGB color distribution and the [I, R-G, Y-B] color distribution constructed with the non-reflection mask mutually inform and fuse with each other, and the weight of each particle is finally optimized by maximum likelihood. Experimental results show that the proposed tracking algorithm effectively overcomes the influence of camera motion on mixed image sequences captured by a mobile camera and thereby improves tracking accuracy.
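The reflection-separation step mentioned in the abstract builds on the intrinsic-image approach of Weiss, listed as reference [7] below. As a rough, minimal sketch (not the thesis implementation), the core idea can be shown on 1-D signals: model each frame as a static layer times a time-varying layer in the log domain, and recover the static layer's spatial log-derivative as the temporal median of the per-frame derivatives. The function names are illustrative.

```python
import numpy as np

def static_layer_derivative(frames):
    """Spatial log-derivative of the static (reflection-free) layer,
    estimated as the temporal median of per-frame derivatives. Assumes
    the time-varying layer's gradient has zero median over time."""
    logs = np.log(np.asarray(frames, dtype=float))
    dx = np.diff(logs, axis=-1)      # spatial derivative of each log-frame
    return np.median(dx, axis=0)     # temporal median keeps the static layer

def varying_layer_derivative(frames):
    """Per-frame residual derivative attributed to the time-varying
    (reflection) layer: frame derivative minus the static estimate."""
    logs = np.log(np.asarray(frames, dtype=float))
    dx = np.diff(logs, axis=-1)
    return dx - np.median(dx, axis=0)
```

Reconstructing an actual illumination image from these derivatives requires integrating them back (a Poisson solve in 2-D), which this sketch omits.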
Abstract (English) For visual tracking, mixed images cannot be avoided since the transmitted scene may be captured with specular reflections. Few previous methods tackle this important problem; thus, this paper proposes a novel robust visual tracking method using co-inference fusion for mixed sequences. Based on the framework of a particle filter with a compensated motion model, this paper adopts the co-inference method to fuse two types of color measurements of the target. Although both measurements are observed from the same mixed image, one of them is built based on a non-reflection mask constructed from the illumination image. The proposed scheme adopts reflection separation to derive an illumination image and a reflection image from mixed images before tracking. To satisfy the time-invariant assumption on the reflection image, camera motion is compensated on each mixed image before reflection separation. Finally, the weight of each particle is individually optimized using maximum likelihood. Experimental results show that the proposed scheme effectively improves tracking accuracy on mixed sequences with camera motion.
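As a rough illustration of the tracking framework described above (not the thesis's actual implementation), the following sketch runs one particle-filter step: it predicts with a random-walk motion model, weights each particle by the product of two color-histogram likelihoods (a simplified stand-in for the co-inference fusion of the RGB and reflection-masked cues), and resamples. All names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def color_histogram(patch, bins=8):
    """Per-channel color histogram of an image patch, L1-normalized."""
    hist = np.concatenate([
        np.histogram(patch[..., c], bins=bins, range=(0.0, 1.0))[0]
        for c in range(patch.shape[-1])
    ]).astype(float)
    return hist / max(hist.sum(), 1e-12)

def bhattacharyya_likelihood(h1, h2, sigma=0.1):
    """Likelihood from the Bhattacharyya distance between two histograms."""
    bc = np.sum(np.sqrt(h1 * h2))
    d2 = max(1.0 - bc, 0.0)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def track_step(frame, particles, ref_hist_a, ref_hist_b, patch=8, noise=2.0):
    """One particle-filter step: predict, weight by two fused cues, resample.

    The two cues are fused by multiplying their likelihoods; the thesis
    instead co-infers them and adds camera-motion compensation to the
    prediction, both of which are simplified away here.
    """
    h, w, _ = frame.shape
    # Predict: random-walk motion model, clipped to keep patches in-frame.
    particles = particles + rng.normal(0.0, noise, particles.shape)
    particles[:, 0] = np.clip(particles[:, 0], 0, h - patch)
    particles[:, 1] = np.clip(particles[:, 1], 0, w - patch)
    # Measure: fuse the two color cues per particle.
    weights = np.empty(len(particles))
    for i, (y, x) in enumerate(particles.astype(int)):
        hist = color_histogram(frame[y:y + patch, x:x + patch])
        weights[i] = (bhattacharyya_likelihood(hist, ref_hist_a) *
                      bhattacharyya_likelihood(hist, ref_hist_b))
    weights /= weights.sum()
    estimate = weights @ particles
    # Resample proportionally to weight (sampling importance resampling).
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], estimate
```

On a synthetic frame with a colored square target, repeating `track_step` concentrates the particle cloud on the target and the weighted-mean estimate settles near its top-left corner.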
Keywords (Chinese) ★ visual tracking (視覺追蹤)
★ reflection separation (反射分離)
★ particle filter (粒子濾波器)
★ co-inference fusion (交互推論融合)
★ maximum likelihood (最大似然機率)
Keywords (English) ★ visual tracking
★ reflection separation
★ particle filter
★ co-inference fusion
★ maximum likelihood
Table of Contents
Abstract (Chinese)
Abstract (English)
Acknowledgments
Table of Contents
List of Figures
List of Tables
Chapter 1: Introduction
1.1 Preface
1.2 Research Motivation
1.3 Research Approach
1.4 Thesis Organization
Chapter 2: Object Tracking Based on the Particle Filter
2.1 Bayesian Filter
2.2 Particle Filter
2.2.1 Sequential Importance Sampling
2.2.2 Sampling Importance Resampling
2.2.3 Object Tracking with an Adaptive Color-Based Particle Filter
2.3 Particle-Filter-Based Object Tracking with Multiple Cues
2.4 Summary
Chapter 3: Reflection Separation for Mixed Images
3.1 Overview of Reflection Separation Techniques
3.2 Separating Reflections and Lighting Using Independent Component Analysis
3.3 Deriving Intrinsic Images from Image Sequences
3.4 Summary
Chapter 4: Proposed Visual Tracking for Mixed Images
4.1 System Architecture
4.2 Motion-Compensated Model for Prediction
4.3 Motion Compensation for Mixed Images
4.4 Non-Reflection Mask Construction and Non-Reflection Weight Estimation
4.5 Co-inference and Fusion of Two Color-Model Cues
4.6 Maximum-Likelihood Optimization of Particle Weights
4.7 Summary
Chapter 5: Experimental Results and Discussion
5.1 Experimental Parameters and Test Sequence Specifications
5.2 Tracking System Results
5.2.1 Tracking Accuracy
5.2.2 Variance of Estimated Position
5.2.3 Degeneracy Problem
5.2.4 Time Complexity
5.3 Summary
Chapter 6: Conclusions and Future Work
References
References
[1] T. Kato, Y. Ninomiya, and I. Masaki, “Preceding Vehicle Recognition Based on Learning from Sample Images,” IEEE Trans. Intelligent Transportation Systems, Vol. 3, No. 4, pp. 252-260, Dec. 2002.
[2] Y. Wang, E. K. Teoh, and D. Shen, “Lane Detection and Tracking Using B-Snake,” Image and Vision Computing, Vol. 22, No. 4, pp. 269-280, Apr. 2004.
[3] D. W. Hansen and Q. Ji, “In the Eye of the Beholder: A Survey of Models for Eyes and Gaze,” IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 32, No. 3, pp. 478-500, Mar. 2010.
[4] M. A. Elgharib, F. Pitie, A. Kokaram, and V. Saligrama, “User-Assisted Reflection Detection and Feature Point Tracking,” in Proc. European Conference on Visual Media Production, No. 13, pp. 13-23, Nov. 2013.
[5] M. A. Ahmed, F. Pitie, and A. Kokaram, “Reflection Detection in Image Sequences,” in Proc. IEEE International Conference on Computer Vision and Pattern Recognition, pp. 705-712, Providence, Rhode Island, June 2011.
[6] DLR Robotics and Mechatronics Center, Research: Texture-Based Tracking under Specular Reflections. http://www.dlr.de/rm/en/desktopdefault.aspx/tabid-3810/6235_read-9004/
[7] Y. Weiss, “Deriving Intrinsic Images from Image Sequences,” in Proc. IEEE International Conference on Computer Vision, Vol. 2, pp. 68–75, British Columbia, Canada, July 2001.
[8] J.-Y. Lu, Y.-C. Wei, and C.-W. Tang, “Visual Tracking Using Compensated Motion Model for Mobile Cameras,” in Proc. IEEE International Conference on Image Processing, pp. 489-492, Brussels, Belgium, Sept. 2011.
[9] Y. Wu and T. Huang, “Robust Visual Tracking by Integrating Multiple Cues Based on Co-inference Learning,” International Journal of Computer Vision, Vol. 58, No. 1, pp. 55-71, June 2004.
[10] C. Bao, Y. Wu, H. Ling, and H. Ji, “Real Time Robust L1 Tracker Using Accelerated Proximal Gradient Approach,” in Proc. IEEE International Conference on Computer Vision and Pattern Recognition, pp. 1830-1837, Providence, Rhode Island, June 2012.
[11] H.-T. Chen and C.-W. Tang, “Visual Tracking Using Blind Source Separation for Mixed Images,” in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing, Florence, Italy, May 2014.
[12] A. Yilmaz, O. Javed, and M. Shah, “Object Tracking: A Survey,” ACM Computing Surveys, Vol. 38, No. 4, pp. 1-45, Dec. 2006.
[13] M. Isard and A. Blake, “Condensation: Conditional Density Propagation for Visual Tracking,” International Journal of Computer Vision, Vol. 29, No. 1, pp. 5-28, 1998.
[14] N. Gordon, M. Arulamalam, S. Maskell, and T. Clapp, “A Tutorial on Particle Filters for On-line Non-linear/Non-Gaussian Bayesian Tracking,” IEEE Trans. Signal Processing, Vol. 50, No. 2, pp. 174-188, Feb. 2002.
[15] G. Kitagawa, “Monte Carlo Filter and Smoother for Non-Gaussian Nonlinear State Space Models,” Journal of Computational and Graphical Statistics, Vol. 5, No. 1, pp. 1-25, Mar. 1996.
[16] A. Lehuger, P. Lechat, and P. Perez, “An Adaptive Mixture Color Model for Robust Visual Tracking,” in Proc. IEEE International Conference on Image Processing, pp. 573-576, Oct. 2006.
[17] P. Perez, C. Hue, J. Vermaak, and M. Gangnet, “Color-Based Probabilistic Tracking,” in Proc. European Conference on Computer Vision, pp. 661-675, Berlin, May 2002.
[18] K. Nummiaro, E. Koller-Meier, and L. V. Gool, “An Adaptive Color Based Particle Filter,” Image and Vision Computing, Vol. 21, No. 1, pp. 99-110, Jan. 2003.
[19] D. Serby, E. K. Meier, and L. Van Gool, “Probabilistic Object Tracking Using Multiple Features,” in Proc. IEEE International Conference on Pattern Recognition, Vol. 2, pp. 184-187, Aug. 2004.
[20] C. Rasmussen and G. D. Hager, “Probabilistic Data Association Methods for Tracking Complex Visual Objects,” IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 2, No. 3, pp. 560–576, Jun. 2001.
[21] G. Hua and Y. Wu, “Measurement Integration Under Inconsistency for Robust Tracking,” in Proc. IEEE International Conference on Computer vision and Pattern Recognition, Vol. 23, No. 3, pp. 650–657, Jun. 2006.
[22] C. Shen, A. van den Hengel, and A. Dick, “Probabilistic Multiple Cue Integration for Particle Filter Based Tracking,” in Proc. International Conference on Digital Image Computing Techniques and Applications, pp. 309-408, Oct. 2003.
[23] Y. Wu, E. Blasch, G. Chen, B. Li, and H. Ling, “Multiple Source Data Fusion via Sparse Representation for Robust Visual Tracking,” in Proc. IEEE International Conference on Information Fusion, Vol. 1, No. 8, pp. 5-8, July 2011.
[24] S. Shafer, “Using Color to Separate Reflection Components,” Technical Report TR-136, Dept. of Computer Science, University of Rochester, 1984.
[25] H. G. Barrow and J. M. Tenenbaum. “Recovering Intrinsic Scene Characteristics from Images,” In A. Hanson and E. Riseman, editors, Computer Vision Systems. Academic Press, pp. 3-26, New York, USA, Apr. 1978.
[26] H. Farid and E. Adelson, “Separating Reflection and Lighting Using Independent Component Analysis,” in Proc. IEEE International Conference on Computer Vision and Pattern Recognition, pp. 262-267, Fort Collins, USA, June 1999.
[27] M. Yamazaki, Y. W. Chen, and G. Xu, “Separating Reflections from Images Using Kernel Independent Component Analysis,” in Proc. IEEE International Conference on Pattern Recognition, Vol. 3, pp. 194-197, Hong Kong, Aug. 2006.
[28] B. Sarel and M. Irani, “Separating Transparent Layers through Layer Information Exchange,” in Proc. European Conference on Computer Vision, pp. 328-341, Prague, Czech Republic, May 2004.
[29] K. Hara, K. Inoue, and K. Urahama, “Separation of Layers from Images Containing Multiple Reflections and Transparency Using Cyclic Permutation,” in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 1157-1160, Taipei, Taiwan, Apr. 2009.
[30] A. Levin and Y. Weiss, “User Assisted Separation of Reflections from A Single Image Using A Sparsity Prior,” IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 29, No. 9, pp. 1647-1655, Sept. 2007.
[31] A. Levin, A. Zomet, and Y. Weiss, “Separating Reflections from A Single Image Using Local Features,” in Proc. IEEE International Conference on Computer Vision and Pattern Recognition, Vol. 1, pp. 306-313, Washington, USA, June 2004.
[32] K. Gai, Z. W. Shi, and C. S. Zhang, “Blind Separation of Superimposed Moving Images Using Image Statistics,” IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 34, No. 1, pp. 19-32, Jan. 2012.
[33] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, “SURF: Speeded Up Robust Features,” Computer Vision and Image Understanding, Vol. 110, No. 3, pp. 346-359, June 2008.
[34] T. Young, “The Bakerian Lecture: On the Theory of Light and Colours,” Philosophical Transactions of the Royal Society of London, Vol. 92, pp. 12-48, Jan. 1802.
[35] D. H. Hubel, Eye, Brain, and Vision, Scientific American Library Series (Book 22), W. H. Freeman, 2nd Edition, New York, USA, May 15, 1995.
[36] PETS 2001: The Second IEEE International Workshop on Performance Evaluation of Tracking and Surveillance, in conjunction with IEEE CVPR 2001, Kauai, Hawaii, USA, Dec. 9, 2001.
[37] H. Ling, L1_APG code (Matlab, ~40 MB with data), an implementation of the L1-APG tracker. http://www.dabi.temple.edu/~hbling/code_data.htm
Advisor: Chih-Wei Tang (唐之瑋)    Approval Date: 2014-07-18
