Master's/Doctoral Thesis 110521102: Detailed Record




Author: Yu-Chen Lin (林妤臻)   Department: Electrical Engineering
Thesis Title: Local Entropy Randomness Distribution and Template Matching Method Combined with Adaptive ORB Feature Extraction to Achieve Accurate Stitching of Multiple Images
Related Theses
★ Precise Trajectory Control of a Two-Axis Robotic Arm Based on an Adaptive Radial Basis Function Neural Network and Nonsingular Fast Terminal Sliding-Mode Control with an Online Delay Estimator
★ Design and Control of a Novel Three-Dimensional Optical Image Measurement System
★ Novel Lemniscate Trajectory Design and Advanced Control for Fast and Precise Positioning of a Piezoelectric Stage
★ Accurate Real-Time Prediction of Pedestrian Paths and Destinations Based on Deep Coordinate Convolution and Autoencoders
★ Modified Lemniscate Trajectory Combined with Adaptive Integral Terminal Sliding-Mode Control and Inverse-Model Hysteresis Compensation for Precise Tracking of a Piezoelectric Stage
★ A Three-Dimensional Optical Microscopy Imaging System Using a PSO-BPNN-PID Controller and Image-Pyramid-Transform Fusion
★ Development of a Low-Torque Robotic Arm Mechanism and Design of an Advanced PWM Controller
★ Scattering-Parameter Characterization of a Four-Port Fixture Using Time-Domain Gating and Mason's Gain Formula
Files: Full text viewable in the repository system (available after 2028-07-01)
Abstract (Chinese) Image stitching has been widely applied in industrial vision, UAV photography, medical imaging, and other fields. The goal of image stitching is to create images with a wide viewing angle, but severe artifacts easily arise under conditions such as parallax and scene illumination. Improving the accuracy of image alignment is therefore an important research problem in image stitching. Alignment accuracy depends on finding as many true matches as possible among the obtained feature points while minimizing false matches, which is crucial for many feature-matching-based tasks in computer vision.
The basic image stitching process usually includes grayscale conversion, feature detection, feature matching, warping, perspective projection, and image blending. This thesis proposes a novel image processing technique that combines entropy with template matching based on the normalized correlation coefficient to search for similar regions; entropy serves as a randomness measure that quantifies the amount of information in an image. First, the image is divided into blocks and the entropy of each block is computed, and the mean of all block entropies is used as a threshold. A block whose entropy falls below the threshold is a weak-texture region and is removed. The remaining blocks are then used as templates and matched against the other image to search for similar regions. Finally, an adaptive ORB threshold is computed from the gray-level distribution of each similar region, and feature extraction is performed within that region to improve the accuracy of feature matching and image alignment. Experimental results show that the proposed method achieves higher feature-matching accuracy and better stitching results.
Abstract (English) Image stitching has been widely employed in industrial vision, aerial photography using unmanned aerial vehicles (UAVs), medical imaging, and other fields. Image stitching aims to create images with a wide viewing angle, but it can easily suffer from severe artifacts under conditions such as parallax and scene lighting. Therefore, improving the accuracy of image alignment is an important research issue in image stitching. Alignment accuracy lies in finding as many ground-truth matches as possible from the given feature points while minimizing false matches, which is crucial for many feature-matching-based tasks in computer vision.
The basic image stitching process usually includes grayscale conversion, feature detection, feature matching, warping, perspective projection, and image blending. This thesis proposes a novel image processing technique that combines entropy and normalized-correlation-coefficient-based template matching to search for similar regions across different images. Entropy serves as a randomness measure that quantifies the amount of information in an image. First, the image is divided into blocks and the entropy of each block is calculated. Next, the average of all block entropies is used as a threshold: if a block's entropy is lower than this average, the block is a weak-texture area and is removed. Each remaining block is then used as a template and matched against the other image to search for similar regions. Finally, the adaptive ORB threshold is computed from the gray-value distribution of each similar region, and feature detection is performed within that region to improve the accuracy of feature matching and image alignment. Experimental results show that the proposed method achieves higher feature-matching accuracy and better stitching results.
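As a concrete illustration of the steps described above, the following Python sketch (assuming OpenCV and NumPy) shows block-wise entropy filtering, normalized-correlation-coefficient template matching, and a contrast-driven choice of ORB's FAST threshold. The block size of 64, the 0.8 match-score cutoff, and the mapping from a region's gray-value standard deviation to the FAST threshold are illustrative assumptions, not the thesis's exact parameters.

import cv2
import numpy as np

def block_entropy(block):
    # Shannon entropy (in bits) of an 8-bit grayscale block.
    hist, _ = np.histogram(block, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def informative_blocks(gray, block_size=64):
    # Split the image into blocks, compute each block's entropy, and keep
    # only blocks above the mean entropy (weak-texture blocks are dropped).
    h, w = gray.shape
    tiles = [(y, x, gray[y:y + block_size, x:x + block_size])
             for y in range(0, h - block_size + 1, block_size)
             for x in range(0, w - block_size + 1, block_size)]
    entropies = [block_entropy(t) for _, _, t in tiles]
    threshold = float(np.mean(entropies))
    return [(y, x, t) for (y, x, t), e in zip(tiles, entropies) if e > threshold]

def similar_region(template, other_gray):
    # Normalized-correlation-coefficient template matching: return the
    # best-matching window of the other image and its NCC score.
    scores = cv2.matchTemplate(other_gray, template, cv2.TM_CCOEFF_NORMED)
    _, best, _, (x, y) = cv2.minMaxLoc(scores)
    th, tw = template.shape
    return other_gray[y:y + th, x:x + tw], best

def adaptive_orb(region, n_features=500):
    # Hypothetical rule mapping the region's gray-value spread to ORB's FAST
    # threshold: low-contrast regions get a lower threshold so that enough
    # corners are still detected. The thesis's exact rule may differ.
    fast_threshold = int(np.clip(region.std() / 2.0, 5, 40))
    return cv2.ORB_create(nfeatures=n_features, fastThreshold=fast_threshold)

# Usage example: detect keypoints only inside regions of the right image that
# are similar to strong-texture blocks of the left image.
# g1 = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)
# g2 = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)
# for y, x, tile in informative_blocks(g1):
#     region, score = similar_region(tile, g2)
#     if score > 0.8:
#         orb = adaptive_orb(region)
#         keypoints, descriptors = orb.detectAndCompute(region, None)

In the full pipeline, the keypoints detected inside the matched regions of both images would presumably be described with rotated BRIEF, matched, filtered of outliers (e.g., with RANSAC), and used to estimate the projective warp before blending, as outlined in Chapters 3 and 4 of the thesis.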
Keywords (Chinese) ★ Image stitching
★ Entropy
★ Template matching
★ Normalized correlation coefficient
★ Similar region
★ ORB adaptive threshold
Keywords (English) ★ Image stitching
★ entropy
★ template matching
★ normalized correlation coefficient
★ similar region
★ ORB adaptive threshold value
Table of Contents
Abstract (Chinese)
Abstract (English)
Acknowledgements
Table of Contents
List of Figures
List of Tables
Explanation of Symbols
Chapter 1 Introduction
1.1 Motivation
1.2 Literature Survey
1.2.1 Feature Matching
1.2.2 Image Stitching
1.3 Contribution
1.4 Thesis Organization
Chapter 2 Preliminaries
2.1 Basic Concept of Image Alignment
2.1.1 Direct Method
2.1.2 Feature-based Method
2.2 Image Processing
2.2.1 Entropy
2.2.2 Template Matching
Chapter 3 Conventional Stitching Structure Based on the ORB Algorithm
3.1 Feature Point Extraction: Oriented FAST
3.2 Feature Point Description: Rotated BRIEF
3.3 Feature Point Matching
3.4 Projective Warp
3.5 Image Blending
Chapter 4 Image Stitching Strategy
4.1 Novel Image Processing
4.1.1 Template Matching Based on the Normalized Correlation Coefficient
4.1.2 Combination of Entropy and Template Matching
4.1.3 Result of Similar Region
4.2 Proposed Strategy
Chapter 5 Experimental Results
5.1 Comparison of Matching Accuracy
5.2 Comparison of Stitching Results
5.3 Time Cost
5.4 Multiple Image Stitching
Chapter 6 Conclusions
References
Advisor: Jim-Wei Wu (吳俊緯)   Approval Date: 2023-08-15
