NCU Institutional Repository (中大機構典藏) — theses and dissertations, past exam papers, journal articles, and research projects: Item 987654321/53612


    Please use this persistent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/53612


    Title: 多重疊近景影像匹配獲取房屋牆面紋理 (Acquisition of Building Facade Texture from Close-range Images)
    Author: Chen, Yu-yuan (陳玉鴛)
    Contributor: Graduate Institute of Civil Engineering
    Keywords: Building Texture; Close-range Images; Image Matching; Image Classification; Spectral Analysis
    Date: 2012-07-21
    Upload Date: 2012-09-11 18:03:39 (UTC+8)
    Publisher: National Central University
    Abstract: Texture mapping of building facades is an important step in photorealistic building modeling, and the popularity of digital cameras has made the acquisition of digital image data very convenient. For an existing block-style building model, this study uses a digital camera to collect a large number of information-rich, highly overlapping close-range images, with the aim of deriving occlusion-free wall textures from the high-overlap imagery. Because the acquired images usually suffer from foreground occlusion, obtaining unobstructed facade textures is the problem to be solved.

    The proposed scheme consists of four parts: (1) orientation modeling, (2) generation of 3D point clouds, (3) occlusion detection, and (4) image compensation. The first step calibrates the camera and solves the image orientation parameters: high-overlap images of signalized targets are acquired for camera calibration, and control points taken from the known block model, together with conjugate points observed among the target images, are used to compute the orientation parameters; only a small number of structure points need to be extracted from the building model to serve as ground control points. In the second step, image matching and space intersection provide the 3D coordinates of conjugate points, producing a 3D point cloud. In addition to multi-window Center-Left-Right (CLR) matching, this study introduces image classification as an additional matching criterion to raise the matching correctness. Next, the images are analyzed with both geometric and spectral properties to separate building texture from foreground occlusions on a region basis: geometrically, the positions of the 3D points indicate object distance; spectrally, classification assigns a class to every pixel. Combining the geometric and spectral criteria yields building textures free of foreground occlusion. The final step is image compensation: after occlusion detection there are several partially occlusion-free texture images, and slave images are used to fill the occluded regions of a selected master image so that the facade texture is as complete as possible. If a wall patch is not captured in any image, however, a fully occlusion-free facade texture cannot be produced.

    Experimental results show that adding image classification information to assist image matching yields higher correlation coefficients, and that resampling combined with multi-window matching improves the handling of matching windows whose contents differ between images. The final results demonstrate that the proposed method removes foreground-occluded regions and fills them from the remaining images, satisfying visualization requirements.
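    The second step of the abstract combines multi-window (Center-Left-Right) matching with image classification as an extra matching criterion. The Python sketch below illustrates that idea only, assuming grey-level master/slave images and per-pixel class-label maps; the function names, window size, and the equal weighting of correlation and class agreement are illustrative assumptions, not the thesis implementation.

import numpy as np

def patch(img, row, col, half):
    """Square window of side 2*half+1 centred on (row, col); None if it leaves the image."""
    r0, r1 = row - half, row + half + 1
    c0, c1 = col - half, col + half + 1
    if r0 < 0 or c0 < 0 or r1 > img.shape[0] or c1 > img.shape[1]:
        return None
    return img[r0:r1, c0:c1]

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def clr_score(master, slave, row, col_m, col_s, half=5):
    """Best NCC over the centre, left-shifted, and right-shifted windows.

    Shifting the window reduces the chance that a single window straddles a
    depth discontinuity, i.e. mixes facade and foreground content.
    """
    scores = []
    for shift in (0, -half, +half):
        pm = patch(master, row, col_m + shift, half)
        ps = patch(slave, row, col_s + shift, half)
        if pm is not None and ps is not None:
            scores.append(ncc(pm, ps))
    return max(scores) if scores else 0.0

def class_agreement(cls_m, cls_s, row, col_m, col_s, half=5):
    """Fraction of pixels whose class labels agree between the two windows."""
    wm = patch(cls_m, row, col_m, half)
    ws = patch(cls_s, row, col_s, half)
    if wm is None or ws is None:
        return 0.0
    return float((wm == ws).mean())

def combined_score(master, slave, cls_m, cls_s, row, col_m, col_s,
                   w_ncc=0.5, w_cls=0.5):
    """Candidate-match score mixing grey-level similarity and class consistency."""
    return (w_ncc * clr_score(master, slave, row, col_m, col_s)
            + w_cls * class_agreement(cls_m, cls_s, row, col_m, col_s))

    In this sketch, a conjugate-point candidate along the search line would simply be the slave column col_s that maximizes combined_score; the thesis additionally resamples the images and applies geometric constraints that are not modelled here.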
    Appears in Collections: [Graduate Institute of Civil Engineering] Theses & Dissertations



    All items in NCUIR are protected by copyright, with all rights reserved.

