RC Version 7.0 © Powered By DSPACE, MIT. Enhanced by NTU Library IR team.


    Please use this permanent URL to cite or link to this item: https://ir.lib.ncu.edu.tw/handle/987654321/98482


    Title: Application of Semi-Adaptive Scale-Invariant Feature Transform for Constructing Accurate 3D Images in Optical Microscopy
    Authors: 廖于綸; Liao, Yu-Lun
    Contributors: Department of Electrical Engineering
    Keywords: multi-view 3D optical microscopy imaging system (3DOMIS); multi-focus image fusion (MFIF); weak texture regions; modified Laplacian (ML); semi-adaptive scale-invariant feature transform (SASIFT); structure from motion (SfM)
    Date: 2025-08-07
    Uploaded: 2025-10-17 12:50:13 (UTC+8)
    Publisher: National Central University
    Abstract:
    With the advancement of microscopic inspection technologies, three-dimensional (3D) reconstruction of optical microscope images has become an increasingly important research topic in fields such as cellular structure analysis in medicine and defect detection in materials science. However, compared with conventional macroscopic 3D reconstruction, microscopic images present unique challenges in feature point extraction and matching, mainly because the limited resolution of optical microscopes leaves the sample surface with sparse texture. The first step of 3D reconstruction is to detect and match corresponding keypoints across multiple 2D images, then estimate their coordinates in 3D space to progressively reconstruct a point cloud model. However, due to the diffraction limit inherent to optical microscopes, fine structural details are often blurred, significantly reducing the stability and number of keypoints and consequently limiting the completeness and accuracy of the final reconstruction.
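    The keypoint-to-point-cloud step described above is commonly realized by linear (DLT) triangulation of each matched point from two calibrated views. The sketch below is a generic NumPy illustration of that standard technique, not code from the thesis; the projection matrices and pixel coordinates are made-up examples:

    ```python
    import numpy as np

    def triangulate_point(P1, P2, x1, x2):
        """Linear (DLT) triangulation of one 3D point from two views.

        P1, P2: 3x4 camera projection matrices.
        x1, x2: matched 2D coordinates of the same point in each view.
        Builds the homogeneous system A @ X = 0 and takes the right
        singular vector with the smallest singular value as X.
        """
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, vt = np.linalg.svd(A)
        X = vt[-1]
        return X[:3] / X[3]  # dehomogenize to Euclidean coordinates
    ```

    Running this over every matched keypoint pair yields the sparse point cloud that a structure-from-motion pipeline then refines.
    
    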
    In recent years, high-resolution techniques such as electron microscopy and atomic force microscopy have become mainstream, but their high cost, the destructive nature of electron-microscope inspection, and the slow imaging speed of atomic force microscopy limit their use in certain applications, such as in vivo 3D observation of living tissue. Striking an acceptable balance between accuracy and cost is therefore an important issue, and 3D reconstruction based on optical microscopy remains highly valuable and irreplaceable in many domains. To address the difficulty of conducting multi-view 3D reconstruction with traditional optical microscopes, this study proposes a complete 3D image reconstruction framework tailored to the microscopic scale, integrating multi-focus and multi-view 2D image sampling, multi-focus image fusion (MFIF), image preprocessing, and keypoint detection enhancement strategies. This end-to-end process improves the robustness and detectability of keypoints in weakly textured regions.
    We first design a multi-view 3D optical microscopy imaging system (3DOMIS), including multi-focus and multi-view sampling, incorporating MFIF techniques to overcome the shallow depth-of-field (DOF) inherent in optical microscopy. In the preprocessing stage, guided filtering is used for smoothing, followed by contrast-limited adaptive histogram equalization (CLAHE) to enhance local contrast and fine details, thereby improving keypoint detection. To address the conventional scale-invariant feature transform (SIFT) algorithm’s limitations in weakly textured regions, we propose a novel method named semi-adaptive scale-invariant feature transform (SASIFT), which leverages a modified Laplacian operator to measure regional texture strength and adaptively blend the original and blurred images. This semi-adaptive fusion enables spatially varying levels of blurring, preserving critical information in low-texture areas while maintaining feature stability in highly textured regions.
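    As an illustration of the semi-adaptive fusion idea, the NumPy sketch below computes a modified-Laplacian texture map and blends the original image with a blurred copy pixel-wise according to local texture strength. The window size, the max-normalization of the weight map, and the box blur standing in for Gaussian smoothing are simplifying assumptions, not the thesis's exact formulation:

    ```python
    import numpy as np

    def modified_laplacian(img):
        """Modified Laplacian (ML): sum of absolute second differences
        along x and y, a standard focus/texture measure."""
        ml = np.zeros_like(img, dtype=np.float64)
        ml[1:-1, :] += np.abs(2 * img[1:-1, :] - img[:-2, :] - img[2:, :])
        ml[:, 1:-1] += np.abs(2 * img[:, 1:-1] - img[:, :-2] - img[:, 2:])
        return ml

    def box_blur(img, k=5):
        """Simple box blur (edge-padded) standing in for Gaussian smoothing."""
        pad = k // 2
        padded = np.pad(img, pad, mode='edge')
        out = np.zeros_like(img, dtype=np.float64)
        for dy in range(k):
            for dx in range(k):
                out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        return out / (k * k)

    def semi_adaptive_fusion(img, k=5):
        """Blend the original and a blurred copy per pixel, weighted by a
        normalized texture-strength map derived from the modified Laplacian,
        so each region receives a different degree of smoothing."""
        strength = box_blur(modified_laplacian(img), k)   # local texture energy
        w = strength / (strength.max() + 1e-12)           # normalize to [0, 1]
        blurred = box_blur(img, k)
        return w * img + (1.0 - w) * blurred
    ```

    The blended result can then be fed to a standard SIFT detector, which is the sense in which the method is "semi-adaptive": the detector itself is unchanged, only its input is spatially modulated.
    
    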
    In the experiments, we first validate the keypoint detection performance of SASIFT under different texture conditions and compare its matching precision and match count with those of traditional SIFT. The results demonstrate that our method significantly increases the number of correct matches. Furthermore, by applying these matches within a structure-from-motion (SfM) pipeline, we successfully reconstruct sparse point clouds and initial mesh models of microscopic samples that were previously unreconstructable, and experimentally validate the reliability of the reconstructed object silhouettes. These findings confirm the feasibility and scalability of the proposed method. Future work may incorporate dense reconstruction and mesh refinement to further enhance the surface smoothness and structural fidelity of the final 3D models.
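    Match counts of the kind compared here are typically produced by nearest-neighbor descriptor matching with Lowe's ratio test. The sketch below shows that generic procedure over synthetic descriptors; it is not the evaluation code of this work, and the ratio threshold of 0.75 is a conventional default, not a value taken from the thesis:

    ```python
    import numpy as np

    def match_ratio_test(desc_a, desc_b, ratio=0.75):
        """Brute-force L2 nearest-neighbor matching with Lowe's ratio test.

        A match (i, j) is accepted only when descriptor i's nearest
        neighbor j in desc_b is clearly closer than its second-nearest
        neighbor, which suppresses ambiguous matches.
        """
        matches = []
        for i, d in enumerate(desc_a):
            dists = np.linalg.norm(desc_b - d, axis=1)
            order = np.argsort(dists)
            best, second = order[0], order[1]
            if dists[best] < ratio * dists[second]:
                matches.append((i, int(best)))
        return matches
    ```

    Comparing the number of accepted matches (and the fraction that are geometrically consistent) between two detectors is the usual way to quantify the kind of improvement reported above.
    
    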
    Appears in Collections: [Graduate Institute of Electrical Engineering] Master's and Doctoral Theses

    Files in This Item:

    File  Size  Format  Views
    index.html  0Kb  HTML  4  View/Open


    All items in NCUIR are protected by copyright, with all rights reserved.

