Thesis 105382605: complete metadata record

DC field  Value  Language
dc.contributor  土木工程學系  zh_TW
dc.creator  喬安娜  zh_TW
dc.creator  Elisabeth Johanna Dippold  en_US
dc.date.accessioned  2024-07-22T07:39:07Z
dc.date.available  2024-07-22T07:39:07Z
dc.date.issued  2024
dc.identifier.uri  http://ir.lib.ncu.edu.tw:444/thesis/view_etd.asp?URN=105382605
dc.contributor.department  土木工程學系  zh_TW
dc.description  國立中央大學  zh_TW
dc.description  National Central University  en_US
dc.description.abstract  此篇論文提出一套改良雙影像運動恢復結構(Structure from Motion, SfM)演算法的影像處理基本架構,以有效地自二維影像中重建三維場景模型。本研究克服了在生成三維點雲過程中有效資料不足的問題。研究的主要關注分為兩個部分:一方面,通過整體考量各種影像特性,並利用機器學習生成有效的額外資訊,以克服有效資料不足的問題;另一方面,以跨感測器實際應用以及成果的誤差和精度分析,驗證所研發方法的有效性。此外,本研究亦兼顧運算資源的節省。整體策略以各個擊破的方式進行。首先,以所生成的額外資料協助辨識出影像中植被和水體等在三維重建時可能帶來錯誤匹配的區域。其次,透過改良的方法流程和架構增加辨識出的有效特徵點數量,以提高所產製的三維點雲密度。最後則是根據不同影像切割大小、運算時間、所需計算資源和目標等參數,優化機器學習的模式與訓練。所開發的機器學習模式可用以產製額外的有用資料,以辨識出影像中植被等在以SfM進行三維重建演算過程中容易造成錯誤匹配的區域,改善最後三維重建的成果。而本研究所建立的多重特徵運算子可進一步有效改良以SfM由影像中進行三維重建的效能、改善匹配、節省運算資源並提升點雲的品質。本研究的實驗案例成果證明,本研究所開發的額外影像(波段)生成機器學習可有效地整合進運動恢復結構三維重建架構;而本研究所開發的方法流程與處理架構,可以有效率地自二維影像中重建三維點雲模型。針對成果的誤差檢驗與分析也顯示,以本論文所提出的方法所產製的三維點雲模型可以獲得良好的精度。整體而言,本研究所開發的三維重建方法與架構,可有效地自二維影像中重建高密度且高精度的三維點雲模型。  zh_TW
dc.description.abstract  The aim of this dissertation is to improve the process of generating 3D models from sets of 2D images with Structure-from-Motion (SfM). The key focus of this study is divided into two parts: on the one hand, overcoming the lack of data by taking various image properties into account and generating additional useful information with machine learning techniques; on the other hand, cross-sensor applications that test different sensors, together with an analysis of error properties and sources. A further, often underrated, goal is saving computational resources. A divide-and-conquer paradigm is applied to decompose the Structure-from-Motion algorithm and achieve results in steps. Firstly, this approach tackles vegetation and water as possible sources of mismatch errors in satellite stereo pairs. Secondly, it increases the number of detected features to accumulate a denser point cloud with a clear property profile. The final step is training cross-sensor, resource-aware image-to-image translation on camera and satellite imagery, optimised with respect to tiles, time and target. As a result, segmentation removes features classified as natural. Further, multiple Feature Detector Operators are applied to the same source imagery, with emphasis on man-made targets. Lastly, RGB-to-CIR translation enables RGB-only sensors to use multispectral information for further processing. This study has successfully proven that segmentation of natural features decreases noise, improves matching, saves resources, and improves point cloud quality. In addition, the use of multiple Feature Detector Operators increases the number of detected features and can improve motion estimation under changing conditions. Moreover, RGB-only sensors can then be used to segment vegetation and remove features classified as vegetation within the Structure-from-Motion algorithm.
Successful point cloud generation requires a sufficient number of features and a minimisation of noise sources. The lack of sensor knowledge, training data and dynamic target properties can restrict the solutions proposed in this study. Overall, the implementation of multiple Feature Detector Operators increases the density of the point cloud, improves motion estimation, and sharpens the target's edges and corners. Image translation increases the versatility of sensors and can be integrated into a Structure-from-Motion framework. This study proves that the developed Structure-from-Motion based framework can generate 3D models effectively and efficiently.  en_US
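The segmentation step described in the abstract flags vegetation before feature matching using multispectral information such as the NDVI (listed among the subject keywords). A minimal sketch of that idea, assuming NumPy arrays for the near-infrared and red bands; the function names and the 0.3 threshold are illustrative, not the dissertation's actual implementation:

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalised Difference Vegetation Index from NIR and red bands."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)  # eps avoids division by zero

def vegetation_mask(nir, red, threshold=0.3):
    """Boolean mask of likely-vegetation pixels; detected features that
    fall inside this mask can be discarded before SfM matching."""
    return ndvi(nir, red) > threshold
```

An RGB-to-CIR translation network, as proposed in the dissertation, would supply the NIR band for RGB-only sensors so the same mask can be computed.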
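The two-view SfM geometry underlying the framework estimates the epipolar geometry from matched features that survive segmentation. As an illustration only (not the dissertation's code), the classical normalised eight-point algorithm for the fundamental matrix can be sketched with NumPy:

```python
import numpy as np

def _normalise(pts):
    """Hartley normalisation: centre points, scale mean distance to sqrt(2)."""
    centroid = pts.mean(axis=0)
    d = np.linalg.norm(pts - centroid, axis=1).mean()
    s = np.sqrt(2) / d
    T = np.array([[s, 0, -s * centroid[0]],
                  [0, s, -s * centroid[1]],
                  [0, 0, 1.0]])
    ph = np.column_stack([pts, np.ones(len(pts))]) @ T.T
    return ph, T

def fundamental_eight_point(x1, x2):
    """Fundamental matrix from >= 8 correspondences; x1, x2 are (N, 2)
    pixel coordinates of matched features in the two views."""
    p1, T1 = _normalise(x1)
    p2, T2 = _normalise(x2)
    # each correspondence contributes one row of the linear system A f = 0
    A = np.column_stack([
        p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
        p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
        p1[:, 0], p1[:, 1], np.ones(len(p1)),
    ])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # enforce the rank-2 constraint of a fundamental matrix
    U, S, Vt2 = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt2
    F = T2.T @ F @ T1  # undo the normalisation
    return F / np.linalg.norm(F)
```

In a full pipeline, the fundamental matrix would feed motion estimation and triangulation of the 3D point cloud.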
dc.subject  運動恢復結構  zh_TW
dc.subject  三維重建  zh_TW
dc.subject  點雲  zh_TW
dc.subject  特徵運算子  zh_TW
dc.subject  Structure-from-Motion  en_US
dc.subject  3D Reconstruction  en_US
dc.subject  Image-to-Image Translation  en_US
dc.subject  Point Cloud  en_US
dc.subject  NDVI  en_US
dc.subject  Feature Detector Operators  en_US
dc.title  基於跨感測器雙視運動結構回復的有效三維重建方法之研究  zh_TW
dc.language.iso  zh-TW  zh-TW
dc.title  Cross-Sensor Two-View Structure-from-Motion for Efficient Three-Dimensional Reconstruction  en_US
dc.type  博碩士論文  zh_TW
dc.type  thesis  en_US
dc.publisher  National Central University  en_US
