Thesis 985202011 Detailed Record




Name  Pei-hsueh Wang (王培學)    Department  Computer Science and Information Engineering
Thesis Title  Accelerating SIFT feature extraction and matching using a GPU
(Implementation of fast SIFT feature extraction and matching using GPU)
Related Theses
★ Real-time online identity recognition using viseme and speech biometric features ★ An image-based SMD carrier-tape alignment system
★ Detection of content forgery and recovery of deleted data on handheld mobile devices ★ License plate authentication based on the SIFT algorithm
★ Local pattern features based on dynamic linear decision functions for face recognition ★ A GPU-based SAR database simulator: a parallel architecture for SAR echo signal and image databases
★ Personal identity verification using palmprints ★ Video indexing using color statistics and camera motion
★ Form document classification using field clustering features and four-directional adjacency trees ★ Stroke features for offline Chinese character recognition
★ Image motion vector estimation using adaptive block matching with multi-image information ★ Color image analysis and its applications to color-quantized image retrieval and face detection
★ Extraction and recognition of logos on Chinese and English business cards ★ Chinese signature verification using virtual-stroke features
★ Face detection, pose classification, and recognition based on triangular geometry and color features ★ A complementary skin-color-based face detection strategy
  1. The full text of this electronic thesis is approved for immediate open access.
  2. The open-access electronic full text is licensed to users for academic research only: personal, non-commercial searching, reading, and printing.
  3. Please comply with the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast the work without authorization.

Abstract (Chinese)  The GPU is a hardware architecture introduced by NVIDIA in 1999, designed primarily to assist the CPU with fast, parallel image computation. Because of its computational power, researchers have continued to parallelize existing algorithms since the product was released, so that existing systems can run more efficiently.
This thesis implements an authentication system for customs containers. Two cameras at different customs gateways first capture two container images. Feature points are immediately extracted from both images by a SIFT algorithm implemented on the GPU, and matching points between the two images are found from the extracted features. From these matches, a homography matrix describing the correspondence between the two images can be computed; RANSAC selects the best-fitting homography from the candidate correspondences. One image is then warped by the homography, partitioned into blocks of equal size, and compared block by block using local binary pattern features, with histogram intersection used to decide whether two blocks are the same.
Experimental results show that feature extraction and matching implemented on the GPU platform take less than 0.4 seconds, more than ten times faster than the same steps implemented on a typical Intel CPU.
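As a concrete illustration of the matching stage described above, the following is a minimal CPU-side sketch using OpenCV's SIFT detector, a ratio-test matcher, and RANSAC homography estimation. It is not the thesis's CUDA implementation, and the file names, ratio, and RANSAC threshold are illustrative assumptions.

import cv2
import numpy as np

def estimate_homography(img_a, img_b, ratio=0.75, ransac_thresh=3.0):
    # Detect SIFT keypoints and 128-D descriptors in both images.
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)

    # Keep only matches that pass Lowe's ratio test on the two nearest neighbours.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des_a, des_b, k=2)
            if m.distance < ratio * n.distance]

    # Fit a homography with RANSAC; the mask marks the inlier correspondences.
    src = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, ransac_thresh)
    return H, inlier_mask

if __name__ == "__main__":
    # "gate_a.jpg" / "gate_b.jpg" are hypothetical file names for the two gateway images.
    img_a = cv2.imread("gate_a.jpg", cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread("gate_b.jpg", cv2.IMREAD_GRAYSCALE)
    H, mask = estimate_homography(img_a, img_b)
    print("homography:\n", H, "\ninliers:", int(mask.sum()))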
Abstract (English)  The graphics processing unit (GPU), announced by NVIDIA in 1999, is a circuit specially designed for parallel computation. Owing to the high computational power of GPUs, many researchers have devoted themselves to commercial product design and academic research on this hardware, and image processing algorithms have recently been ported to the platform to improve their performance.
In this thesis, an authentication system for customs containers is developed. First, two images are captured by two different cameras at two customs gateways. SIFT features are efficiently extracted by parallel operations implemented on a GPU platform; essentially, the features for each image pixel are computed independently and simultaneously. Using the extracted features, corresponding points between the two container images are matched. Given the set of corresponding points, a homography matrix is estimated with the RANSAC algorithm, which establishes the geometric relation between the two images. The container image from one gateway is then partitioned into several blocks, and local binary pattern (LBP) features are extracted for each block. Similarly, the corresponding LBP features of the image captured at the other gateway are extracted with the help of the estimated homography. The similarity of the two images is computed by histogram intersection to determine whether they show the same container.
The experimental results demonstrate the performance of feature extraction and matching on the GPU platform: less than 0.4 seconds is needed, more than ten times faster than the Intel CPU implementation.
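To make the block-wise verification stage concrete, below is a rough NumPy sketch of a plain 8-neighbour LBP code, per-block 256-bin histograms, and histogram-intersection scoring. The two inputs are assumed to be already aligned by the estimated homography; the block size and decision threshold are illustrative values, not the thesis's actual settings.

import numpy as np

def lbp_image(gray):
    # 8-neighbour LBP code for every interior pixel (values 0..255).
    g = gray.astype(np.int32)
    center = g[1:-1, 1:-1]
    codes = np.zeros_like(center)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes |= (neighbour >= center).astype(np.int32) << bit
    return codes

def block_histograms(lbp, block=32):
    # Normalized 256-bin LBP histogram for each non-overlapping block.
    hists = []
    h, w = lbp.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = lbp[y:y + block, x:x + block].ravel()
            hist = np.bincount(patch, minlength=256).astype(np.float64)
            hists.append(hist / hist.sum())
    return np.array(hists)

def histogram_intersection(h1, h2):
    # Intersection of two normalized histograms: 1.0 means identical.
    return float(np.minimum(h1, h2).sum())

def same_container(aligned_a, aligned_b, block=32, threshold=0.7):
    # Compare the two aligned images block by block and average the scores.
    ha = block_histograms(lbp_image(aligned_a), block)
    hb = block_histograms(lbp_image(aligned_b), block)
    score = float(np.mean([histogram_intersection(a, b) for a, b in zip(ha, hb)]))
    return score >= threshold, score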
Keywords (Chinese)  ★ CUDA
★ local binary pattern (LBP)
★ GPU
★ SIFT
Keywords (English)  ★ GPU
★ CUDA
★ SIFT
★ LBP
Table of Contents  Abstract---I
Abstract (Chinese)---II
Acknowledgments---III
Table of Contents---V
List of Tables---VI
List of Figures---VII
Chapter 1  Introduction---1
1-1 Research Motivation and Objectives---1
1-2 Literature Review---2
1-3 System Architecture---4
1-4 Thesis Organization---7
Chapter 2  The CUDA Architecture---8
2-1 CUDA Programming Model---8
2-2 CUDA Hardware Architecture---10
2-3 CUDA Programming Strategies---12
Chapter 3  Container Feature Extraction and Matching---14
3-1 ROI Extraction with the Watershed Algorithm---14
3-2 SIFT: Scale-Invariant Feature Transform---15
3-3 Container Matching---5
3-4 Container Image Matching---30
Chapter 4  Experimental Results---34
4-1 Execution Speed Comparison---35
4-2 Setting the Correlation Coefficient and Point Distance Thresholds---36
4-3 Container Image Matching---42
4-4 Experimental Conclusions---49
Chapter 5  Conclusions and Future Work---51
References---52
References  [ 1 ] GPGPU, General-Purpose Computation on Graphics Hardware. [Online]. Available: http://gpgpu.org/, Jun. 1, 2011.
[ 2 ] D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” Int. J. Comput. Vis., vol. 60, pp. 91–110, 2004.
[ 3 ] NVIDIA Corporation, Compute Unified Device Architecture Programming Guide. [Online]. Available: http://developer.nvidia.com/category/zone/cuda-zone, Jun 1, 2011.
[ 4 ] T. Ojala, M. Pietikainen, and T. Maenpaa, “Multiresolution gray-scale and rotation invariant texture classification with local binary patterns,” IEEE Trans. Pattern Anal. and Mach. Intell., vol. 24, no. 7, pp. 971–987, 2002.
[ 5 ] K. Mikolajczyk and C. Schmid, “A performance evaluation of local descriptors,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 10, pp. 1615–1630, Oct. 2005.
[ 6 ] M. Brown and D. G. Lowe, “Invariant features from interest point groups,” in Proc. British Machine Vision Conf., pp. 656–665, 2002.
[ 7 ] M. A. Fischler and R. C. Bolles, “Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography,” Comm. of the ACM, vol. 24, no. 6, pp. 381-395, 1981.
[ 8 ] A. Psyllos, C.N. Anagnostopoulos, and E. Kayafas, “Vehicle logo recognition using a SIFT-based enhanced matching scheme”, IEEE Trans. on Intell. Transportation Syst., vol. 11, no. 2, pp. 322-328, June 2010.
[ 9 ] Y. Ke and R. Sukthankar, “PCA-SIFT: a more distinctive representation for local image descriptors,” Proc. Conf. Computer Vision and Pattern Recognition, pp. 511-517, 2004.
[ 10 ] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, “SURF: speeded-up robust features,” Comput. Vis. Image Underst., vol. 110, no. 3, pp. 346–359, Jun. 2008.
[ 11 ] L. Juan and O. Gwun, “A comparison of SIFT, PCA-SIFT and SURF,” Int. J. of Image Process., vol. 3, no. 4, pp. 143–152, Oct. 2009.
[ 12 ] Q. Zhang, Y. Chen, Y. Zhang, and Y. Xu, “SIFT implementation and optimization for multi-core systems,” Proc. IEEE Int. Symp. Parallel and Distributed Process., pp. 1–8, Apr. 2008.
[ 13 ] Y. Sato, T. Sugimura, H. Noda, Y. Okuno, K. Arimoto, and T. Nagasaki, “Integral-image based implementation of U-SURF algorithm for embedded super parallel processor,” in Proc. Intell. Signal and Commun. Syst., pp. 485-488, Jan. 2009.
[ 14 ] S. Heymann, K. Müller, A. Smolic, B. Fröhlich, and T. Wiegand, “SIFT implementation and optimization for general-purpose GPU,” in Proc. of Int. Conf. in Central Europe on Comput. Graphics, Visualization and Comput. Vision, pp. 317-322, Feb. 2007.
[ 15 ] S. N. Sinha , J. Frahm , M. Pollefeys , and Y. Genc, “GPU-based video feature tracking and matching,” in Workshop on Edge Computing Using New Commodity Architectures (EDGE), vol. 12, pp. 1-15, May. 2006.
[ 16 ] S. Warn, W. Emeneker, J. Cothren, and A. Apon, “Accelerating SIFT on parallel architectures,” in Proc. of 2009 Int. Conf. Cluster Computing and Workshops, pp. 1-4, 2009.
[ 17 ] J. Kim, E. Park, X. Cui, H. Kim, and W. A. Gruver, “A fast feature extraction in object recognition using parallel processing on CPU and GPU,” in Proc. of 2009 IEEE Int. Conf. Syst., Man and Cybern., pp. 3842, 2009.
[ 18 ] V. Podlozhnyuk, Image Convolution with CUDA. [Online]. Available: http://www.ieee.org/documents/ieeecitationref.pdf, Jun. 1, 2007.
[ 19 ] C. Wu, “SiftGPU: A GPU implementation of scale invariant feature transform (SIFT),” [Online]. Available: http://www.cs.unc.edu/~ccwu/siftgpu/, Jun 1, 2011.
Advisor  Kuo-chin Fan (范國清)    Date of Approval  2011-7-26
