

    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/93341


    Title: Using Positive and Negative Images for Supervised Training to Achieve Luminance-Adaptive Fusion of Infrared and Visible Light Images (運用正負影像進行監督式訓練以實現紅外光與可見光之畫面亮度自適應融合)
    Author: Yeh, Ling-Wei (葉凌瑋)
    Contributors: Department of Computer Science and Information Engineering
    Keywords: Image Fusion; Deep Learning; Convolutional Neural Network; Supervised Learning; Luminance Self-Adaptive
    Date: 2023-07-27
    Upload time: 2024-09-19 16:54:23 (UTC+8)
    Publisher: National Central University
    Abstract: Infrared and visible light image fusion aims to preserve the information from different spectral images of the same scene in a single frame. However, large brightness differences between the two inputs can cause their contents to interfere with each other and degrade the presentation of complementary information in the fused image. Existing fusion methods generally perform well on lower-brightness images, but when one input contains high-brightness content, we observe a drop in texture contrast in the fused result. To avoid the poor fusion caused by extremely high- and low-brightness images, we propose a new training method for deep learning models that uses positive and negative images for supervised training. Since edges are the key content to present in a fused image, we compute image gradients to extract textures and preserve the details of the source images as a reference for supervised learning. Textures in different regions of the frame help generate the guidance maps used to train the fusion network, and we additionally use edge enhancement as the reference for the fused image's gradients to reduce the impact of extreme brightness on image details. We introduce a channel attention module to strengthen or weaken individual channels of the feature maps and to speed up model training. The supervised training measures the similarity between the positive and negative fused images and their guidance maps, the similarity between the inverted negative fused image and the positive fused image, and the similarity of the fused image gradients, so as to preserve image details and achieve luminance-adaptive image fusion. Experimental results demonstrate the effectiveness of the proposed method and confirm that the generated guidance maps help preserve information when fusing infrared and visible light images.
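    The record contains only the abstract, not the thesis full text, so the following is a minimal PyTorch-style sketch of the training objective the abstract describes: guidance-map similarity for the positive and negative fused images, similarity between the inverted negative fused image and the positive fused image, gradient similarity against an edge-enhanced reference, and a channel attention module. All names, loss weights, and the complement-based definition of the negative image are assumptions for illustration, not the authors' implementation.

        # Minimal sketch of the abstract's training objective, assuming:
        #   - the "negative" image is the intensity complement (1 - x) of the positive image,
        #   - each similarity term is an L1 distance,
        #   - image gradients are approximated with Sobel filters.
        # Names and weights are illustrative, not the thesis implementation.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        def sobel_gradient(img: torch.Tensor) -> torch.Tensor:
            """Approximate per-pixel gradient magnitude of a (N, 1, H, W) image."""
            kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                              device=img.device).view(1, 1, 3, 3)
            ky = kx.transpose(2, 3)
            gx = F.conv2d(img, kx, padding=1)
            gy = F.conv2d(img, ky, padding=1)
            return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

        class ChannelAttention(nn.Module):
            """Squeeze-and-excitation style attention: reweight feature-map channels."""
            def __init__(self, channels: int, reduction: int = 4):
                super().__init__()
                self.fc = nn.Sequential(
                    nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
                    nn.Linear(channels // reduction, channels), nn.Sigmoid())

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                w = self.fc(x.mean(dim=(2, 3)))           # (N, C) channel weights
                return x * w.unsqueeze(-1).unsqueeze(-1)  # strengthen / weaken channels

        def fusion_loss(fused_pos, fused_neg, guide_pos, guide_neg, edge_guide):
            """Combine the three similarity terms from the abstract (equal weights assumed)."""
            l_guide = F.l1_loss(fused_pos, guide_pos) + F.l1_loss(fused_neg, guide_neg)
            l_posneg = F.l1_loss(1.0 - fused_neg, fused_pos)   # inverted negative vs. positive
            l_grad = F.l1_loss(sobel_gradient(fused_pos), edge_guide)
            return l_guide + l_posneg + l_grad

    The hypothetical guide_pos, guide_neg, and edge_guide tensors stand in for the guidance maps and edge-enhanced gradient reference the abstract mentions; how they are actually constructed is detailed only in the thesis itself.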
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Theses and Dissertations

    Files in This Item:

    File          Description    Size    Format    Views
    index.html                   0Kb     HTML      3


    All items in NCUIR are protected by original copyright.

