Thesis/Dissertation 100582601: Detailed Record




Name: Kanoksak Wattanachote (金功勳)    Department: Computer Science and Information Engineering
Thesis Title: Dynamic Texture Transformation by Strategic Motion Coherence Analysis
(基於分析關鍵動量相關性之動態紋理轉換)
Related Theses
★ A grouping mechanism based on social relations in edX online discussion forums
★ A 3D visualized Facebook interaction system built with Kinect
★ A Kinect-based assessment system for smart classrooms
★ An intelligent metropolitan route-planning mechanism for mobile device applications
★ A seam-carving system that preserves straight-line structures in images
★ A community recommendation mechanism built on an open online social learning environment
★ System design of an interactive situated learning environment for English as a foreign language
★ An emotional color-transfer mechanism based on skin-color preservation
★ A gesture recognition framework for virtual keyboards
★ Error analysis of fractional-power grey generating prediction models and development of a computer toolbox
★ Real-time human skeleton motion construction using inertial sensors
★ Real-time 3D modeling based on multiple cameras
★ A grouping mechanism for genetic algorithms based on complementarity and social network analysis
★ A virtual musical instrument performance system with real-time hand tracking
★ A real-time virtual musical instrument performance system based on neural networks
★ A real-time hand tracking system, using the virtual cello as an example
  1. This electronic thesis is authorized for immediate open access.
  2. The open-access electronic full text is licensed to users only for personal, non-profit retrieval, reading, and printing for the purpose of academic research.
  3. Please observe the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast it without authorization, to avoid violating the law.

Abstract (Chinese): Changing the distribution of motion, color, and other properties of a dynamic texture in a video is likely to produce a different visual effect; for example, a waterfall texture in a video can be transformed into a fire texture, among other possibilities. This dissertation analyzes strategic motion to propose a novel method for transformation between two dynamic textures.
Dynamic textures with complex shapes or motion are difficult to represent with concrete models and difficult to predict, especially when transforming them into a new motion texture. This dissertation constructs an algorithm for dynamic texture transformation in video sequences based on small-region pixel differences and motion coherence. An interactive interface is provided for the designed algorithms, and the proposed algorithms are successfully demonstrated on videos enriched with special effects. During dynamic texture transformation we address 3D patch creation, motion coherence analysis, and patch matching between patches. The main contributions cover two issues. The first is a new measure for evaluating motion coherence, validated with practical tests to confirm its usefulness, i.e., its closeness to what the human eye accepts. The second is an algorithm for automatic dynamic texture transformation: this optimized algorithm only requires the user to choose thresholds and mark the texture region on the first frame, after which the remaining transformation completes automatically. Experimental results show that motion coherence can effectively identify coherent motion regions during patch matching and transformation.
In addition, the distinctness of motion coherence for each dynamic texture is also observed and analyzed in this dissertation. The findings on distinguishing motion coherence may be leveraged by other developments that study motion coherence. For example, integrating a motion analysis module into existing surveillance systems could bring advantages to next-generation digital security systems and substantially remedy the shortcomings of current ones.
Abstract (English): Changing dynamic texture appearances can create new looks in both the motion and the color appearance of videos. For instance, a waterfall texture in a video scene can appear as a fire texture, or vice versa. This dissertation proposes a novel method for dynamic texture transformation between two dynamic textures by strategic motion coherence analysis.
Dynamic textures with sophisticated shape and motion appearance are difficult to represent by physical models and hard to predict, especially when transforming them into a new motion texture. This study proposes dynamic texture transformation algorithms for video sequences based on the difference of pixel intensity and the motion coherence of patches. The technology is successfully applied to many special-effect videos using an interactive tool developed for this research. Our study addresses the issues of 3D patch creation, motion coherence analysis, and patch matching for dynamic texture transformation. The main contribution covers two issues. The first is a new metric for evaluating motion coherence, with solid tests to justify its usefulness (closeness to human visual perception). The second is a set of algorithms for automatic dynamic texture transformation: an optimized algorithm only requires users to target the texture on the first frame, using an optional threshold to determine the texture area; the rest of the process is completed automatically by the proposed algorithms. The experimental results show that the motion coherence index effectively finds coherent motion regions for patch matching and transformation.
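The dissertation's actual coherence index is not reproduced in this record. As an illustrative stand-in, a common way to score how coherent the motion vectors inside a patch are is the ratio of the norm of the mean flow vector to the mean of the individual vector norms: 1 when all vectors are aligned, 0 when motions cancel out. The sketch below assumes dense optical flow is already available as an array of 2D vectors per patch; the function name and the measure itself are assumptions for illustration, not the dissertation's proposed metric.

```python
import numpy as np

def coherence_index(flow_vectors, eps=1e-9):
    """Illustrative motion coherence score for one patch.

    flow_vectors: (N, 2) array of per-pixel motion vectors.
    Returns a value in [0, 1]: 1 = all vectors aligned, 0 = motions cancel.
    (A stand-in measure, not the dissertation's proposed index.)
    """
    v = np.asarray(flow_vectors, dtype=float)
    mean_vec = v.mean(axis=0)                      # average motion of the patch
    mean_norm = np.linalg.norm(v, axis=1).mean()   # average per-pixel speed
    return float(np.linalg.norm(mean_vec) / (mean_norm + eps))

# A patch whose pixels all move the same way is maximally coherent,
# while two opposing motions cancel each other.
aligned = np.tile([1.0, 0.0], (8, 1))
opposed = np.array([[1.0, 0.0], [-1.0, 0.0]])
print(round(coherence_index(aligned), 3))  # → 1.0
print(coherence_index(opposed))            # → 0.0
```

In practice the dense flow field feeding such a score would come from an optical flow estimator such as Farneback's method, which the dissertation surveys in Chapter 2.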
In addition, the distinctness of motion coherence for each dynamic texture is observed and analyzed in this research. This distinctness may be leveraged for other system development that relies on motion coherence. For instance, a next-generation digital security system could build on the strengths of existing closed-circuit television (CCTV) systems and significantly remedy their shortcomings by integrating a motion analysis module into the existing surveillance pipeline.
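The patch-matching step described in the abstract can be illustrated with a deliberately simplified sketch: given one coherence score per source patch and per candidate target patch, each source patch is paired with the target patch whose score is closest. This greedy nearest-score pairing is an assumption for illustration only; the dissertation's matching additionally uses pixel-intensity differences and motion templates.

```python
import numpy as np

def match_patches(src_scores, tgt_scores):
    """Pair each source patch with the target patch of closest coherence score.

    src_scores, tgt_scores: 1-D sequences of per-patch coherence values.
    Returns a list of (source_index, target_index) pairs.
    (Greedy nearest-score matching; an illustrative simplification, not the
    dissertation's full matching procedure.)
    """
    src = np.asarray(src_scores, dtype=float)
    tgt = np.asarray(tgt_scores, dtype=float)
    # |src_i - tgt_j| distance table, then the closest target per source.
    dist = np.abs(src[:, None] - tgt[None, :])
    best = dist.argmin(axis=1)
    return [(i, int(j)) for i, j in enumerate(best)]

print(match_patches([0.9, 0.2], [0.1, 0.85, 0.5]))  # → [(0, 1), (1, 0)]
```

A nearest-score rule like this lets several source patches share one target patch; a full transformation pipeline would typically add spatial constraints to keep the matched patches consistent across frames.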
Keywords ★ Dynamic textures
★ Motion coherence analysis
★ Motion template matching
★ Video editing
★ Dynamic texture transformation
★ Special effects
Table of Contents
Abstract (Chinese)....................................................................i
ABSTRACT..............................................................................................ii
Acknowledgements................................................................................iii
Contents................................................................................................iv
List of Figures........................................................................................vi
List of Tables.........................................................................................xii
Chapter 1 Introduction .......................................................................... 1
1.1 Motivation ....................................................................................... 1
1.2 Background...................................................................................... 3
1.3 Dissertation Organization ..................................................................8
Chapter 2 Related Works ....................................................................... 9
2.1 Optical Flow.......................................................................................9
2.1.1 Horn-Schunck.............................................................................. 10
2.1.2 Lucas-Kanade ............................................................................. 11
2.1.3 Farneback ................................................................................... 12
2.2 Motion Vector Estimation ............................................................... 14
2.2.1 Block-matching ........................................................................... 14
2.2.2 Resultant Vector............................................................................15
2.3 Motion Angle Measurement ............................................................ 17
2.4 Connected Component Labeling and Filtering ................................ 18
2.5 HSV Components and Color Filtering .............................................. 20
2.6 Template Matching ......................................................................... 23
2.7 Probability Density Function and Cumulative Distribution Function..24
2.8 Patch Transformation without Strategic Motion Coherence............. 26
Chapter 3 Proposed Method..................................................................28
3.1 Video Motion Estimation..................................................................28
3.2 Dynamic Texture Segmentation........................................................29
3.2.1 User-defined Dynamic Texture Segmentation................................29
3.2.2 Semi-automatic Dynamic Texture Segmentation ..........................30
3.3 3D Patch Production ...................................................................... 37
3.4 Motion Coherence Analysis............................................................. 40
3.5 Patch Matching............................................................................... 48
3.6 The Final Patch Transformation .......................................................51
Chapter 4 Experimental Results and Discussions...................................53
4.1 Dynamic Texture Transformation.......................................................53
4.2 Transformation Results and Coherence Evaluation............................62
4.3 Dynamic Texture and Motion Coherence Index.................................94
4.4 Motion Coherence Index and Video Inpainting ................................ 97
Chapter 5 Conclusions and Future Works ............................................. 99
References ......................................................................... 105
Appendix A ......................................................................... 111
Appendix B ......................................................................... 115
Advisor: Timothy K. Shih (施國琛)    Date of Approval: 2016-1-14
