    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/94451


    Title: Cross-Sensor Two-View Structure-from-Motion for Efficient Three-Dimensional Reconstruction
    Author: Dippold, Elisabeth Johanna (喬安娜)
    Contributor: Department of Civil Engineering
    Keywords: Structure-from-Motion; 3D Reconstruction; Point Cloud; Feature Detector Operators; Image-to-Image Translation; NDVI
    Date: 2024-07-22
    Date of Upload: 2024-10-09 14:44:21 (UTC+8)
    Publisher: National Central University
    Abstract: This dissertation aims to improve the process of generating 3D models from sets of 2D images with Structure-from-Motion (SfM). The study has two main focuses: first, overcoming the lack of usable data by taking various image properties into account and generating additional useful information with machine learning; and second, cross-sensor applications that test different sensors, together with an analysis of error properties and sources. The often underrated goal of saving computational resources is addressed as well. A divide-and-conquer paradigm decomposes the Structure-from-Motion algorithm so that results are achieved in steps. First, this approach tackles vegetation and water as possible sources of mismatch errors in satellite stereo pairs. Second, it increases the number of detected features so as to accumulate a denser point cloud with a clear property profile. The final step trains cross-sensor, resource-aware image-to-image translation between camera and satellite imagery, with attention to tile size, runtime, and target. As a result, segmentation removes features classified as natural cover; multiple Feature Detector Operators are applied to the same origin and oriented towards man-made targets; and RGB-to-CIR translation (RGB2CIR) enables RGB-only sensors to exploit multispectral information for further processing. The experiments show that segmenting out natural features decreases noise, improves matching, saves resources, and improves point cloud quality. Using multiple Feature Detector Operators increases the number of features and can improve motion estimation under changing conditions. RGB-only sensors can then be used to segment vegetation and remove features classified as vegetation within the Structure-from-Motion algorithm. Successful point cloud generation requires sufficient features and a minimisation of noise sources; limited sensor knowledge, limited training data, and dynamic target properties can restrict the proposed solutions. Overall, the use of multiple Feature Detector Operators increases point cloud density, improves motion estimation, and sharpens the target's edges and corners, while image translation increases the versatility of sensors and can be integrated into a Structure-from-Motion framework. Accuracy assessments of the reconstructed point clouds show that the proposed method achieves good precision, and the study demonstrates that the developed Structure-from-Motion framework can generate dense 3D models effectively and efficiently.
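    A minimal illustrative sketch of two ideas described in the abstract (not the author's implementation): masking vegetation with an NDVI threshold derived from a near-infrared band, so that vegetation pixels never contribute features, and pooling correspondences from multiple feature detector operators before two-view motion estimation. It assumes OpenCV and NumPy; the SIFT/AKAZE pair, band order, threshold value, and function names are illustrative assumptions, not choices taken from the dissertation.

        import cv2
        import numpy as np

        def ndvi_mask(nir, red, threshold=0.3):
            """Keep only pixels whose NDVI is below the threshold (i.e. non-vegetation)."""
            nir = nir.astype(np.float32)
            red = red.astype(np.float32)
            ndvi = (nir - red) / (nir + red + 1e-6)
            return np.where(ndvi < threshold, 255, 0).astype(np.uint8)

        def pooled_correspondences(gray1, gray2, mask1=None, mask2=None):
            """Pool matches from several feature detector operators (SIFT and AKAZE here)."""
            pts1, pts2 = [], []
            for detector in (cv2.SIFT_create(), cv2.AKAZE_create()):
                k1, d1 = detector.detectAndCompute(gray1, mask1)
                k2, d2 = detector.detectAndCompute(gray2, mask2)
                if d1 is None or d2 is None:
                    continue
                # Binary descriptors (AKAZE) use Hamming distance; float descriptors (SIFT) use L2.
                norm = cv2.NORM_HAMMING if d1.dtype == np.uint8 else cv2.NORM_L2
                matcher = cv2.BFMatcher(norm, crossCheck=True)
                for m in matcher.match(d1, d2):
                    pts1.append(k1[m.queryIdx].pt)
                    pts2.append(k2[m.trainIdx].pt)
            return np.float32(pts1), np.float32(pts2)

        # Two-view relative pose from the pooled, vegetation-free correspondences.
        # K is the intrinsic matrix of a hypothetical camera; band indices are assumptions.
        # mask = ndvi_mask(cir[:, :, 0], rgb[:, :, 0])
        # pts1, pts2 = pooled_correspondences(gray1, gray2, mask, mask)
        # E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
        # _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)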
    Appears in Collections: [Graduate Institute of Civil Engineering] Master's and Doctoral Theses

    Files in This Item:

    File          Description    Size    Format    Views
    index.html                   0 KB    HTML      28

    All items in NCUIR are protected by original copyright.
