NCU Institutional Repository: Item 987654321/98564


    Please use this identifier to cite or link to this item: https://ir.lib.ncu.edu.tw/handle/987654321/98564


    Title: Design of an Infrared and Visible Image Fusion Model Using Content Simulation
    Authors: 尤虹惠;Yu, Hong-Hui
    Contributors: Department of Computer Science and Information Engineering
    Keywords: Image Fusion; Deep Learning; Luminance Simulation; Fusion Quality Assessment
    Date: 2025-08-14
    Issue Date: 2025-10-17 12:55:51 (UTC+8)
    Publisher: National Central University
    Abstract: Infrared and visible image fusion aims to preserve complementary information from both modalities in a single frame: infrared images emphasize heat-emitting targets such as humans and vehicles, while visible images retain detailed textures. However, under extreme conditions such as intense lighting or pervasive smoke, existing fusion methods often struggle to maintain consistent quality, and they lack alignment with downstream tasks or robust evaluation metrics. This study proposes a lightweight fusion model that incorporates a brightness-adaptive loss function, using positive and negative film image pairs to simulate diverse illumination conditions, along with HDR-derived guidance maps to enhance sensitivity to texture and lighting variations. The combined reconstruction, L1, gradient, and HDR-guided losses help maintain appropriate brightness and clear contours while keeping model complexity low, making the method suitable for deployment on resource-constrained edge devices. Experiments on the MSRS, LLVIP, and TNO datasets show that the proposed method more effectively preserves target regions and achieves a strong balance across multiple visual quality metrics. Additionally, we introduce a novel evaluation metric, the FCE-score, which leverages YOLOv8-predicted key regions to quantify a fused image's contribution to object recognition without requiring detector retraining, offering both efficiency and objectivity for real-world quality monitoring and fusion model selection.
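    The abstract names the model's four training terms (fusion reconstruction, L1, gradient, and HDR-guided losses) and its brightness simulation via positive/negative film images, but not their exact formulas or weights. The following PyTorch-style sketch is purely illustrative: the brightest-source reference, the loss weights, and the make_negative helper are assumptions for exposition, not the author's implementation.

        import torch
        import torch.nn.functional as F

        def image_gradients(img):
            # Finite differences as a simple proxy for image gradients.
            dx = img[..., :, 1:] - img[..., :, :-1]
            dy = img[..., 1:, :] - img[..., :-1, :]
            return dx, dy

        def fusion_loss(fused, ir, vis, hdr_guide, w=(1.0, 1.0, 1.0, 1.0)):
            # Hypothetical composite of the four terms named in the abstract;
            # the reference image and weights are illustrative assumptions.
            ref = torch.max(ir, vis)                 # brightest-source reference (assumption)
            recon = F.mse_loss(fused, ref)           # fusion reconstruction loss
            l1 = F.l1_loss(fused, vis)               # L1 loss toward visible texture
            fdx, fdy = image_gradients(fused)
            rdx, rdy = image_gradients(ref)
            grad = F.l1_loss(fdx, rdx) + F.l1_loss(fdy, rdy)  # gradient (edge) loss
            hdr = F.l1_loss(fused, hdr_guide)        # HDR-guided loss
            return w[0] * recon + w[1] * l1 + w[2] * grad + w[3] * hdr

        def make_negative(img):
            # "Negative film" counterpart used to simulate an inverted
            # brightness condition (assumes intensities in [0, 1]).
            return 1.0 - img

    Similarly, the FCE-score is characterized only at a high level (YOLOv8-predicted key regions, no detector retraining), so its exact formula is not recoverable from this record. A hypothetical proxy, assuming the ultralytics package and an off-the-shelf yolov8n.pt checkpoint, could average detection confidence over the regions YOLOv8 predicts on the fused image:

        from ultralytics import YOLO

        _model = YOLO("yolov8n.pt")  # pretrained detector; no per-method retraining

        def fce_score_proxy(fused_image_path):
            # Illustrative stand-in for the thesis's FCE-score: run YOLOv8 on
            # the fused image and average the confidences of its predicted
            # key regions as a measure of contribution to object recognition.
            boxes = _model(fused_image_path)[0].boxes
            if boxes is None or len(boxes) == 0:
                return 0.0
            return float(boxes.conf.mean())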
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Electronic Thesis & Dissertation

    Files in This Item:

    File          Description    Size    Format
    index.html                   0Kb     HTML    View/Open


    All items in NCUIR are protected by copyright, with all rights reserved.

