

    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/83327


    Title: Comparison of Super-Resolution Techniques for Optical Satellite Imagery
    Authors: 胡翔;Hu, Xiang
    Contributors: Graduate Institute of Space Science and Engineering
    Keywords: super-resolution; deep learning; neural network
    Date: 2020-07-28
    Issue Date: 2020-09-02 15:27:26 (UTC+8)
    Publisher: National Central University
    Abstract: Remote sensing satellite imagery has been widely applied in daily life, for example in crop species analysis, water quality monitoring, and environmental change detection, all of which bring great benefits. However, because the spatial resolution of a satellite is limited, finer surface features cannot be observed in optical satellite images. Many super-resolution (SR) imaging methods have therefore been developed to improve spatial resolution through software or hardware. Traditional super-resolution is built on interpolation; although bilinear, nearest-neighbor, or bicubic interpolation can achieve a super-resolution effect, the high-frequency details between pixels cannot be fully restored.
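As a concrete illustration of the interpolation baseline discussed above, the sketch below upsamples a small array with bicubic (order-3 spline) interpolation via SciPy's `ndimage.zoom`. The 4×4 test array and the ×2 scaling factor are illustrative choices, not data from the thesis.

```python
import numpy as np
from scipy import ndimage

# A small synthetic "low-resolution" band (values in 0..255).
lr = np.arange(16, dtype=np.float64).reshape(4, 4) * 17.0

# Bicubic upsampling: order=3 selects cubic spline interpolation.
# This raises the pixel count but cannot invent high-frequency detail.
hr_bicubic = ndimage.zoom(lr, zoom=2, order=3)

print(hr_bicubic.shape)  # (8, 8)
```

This is exactly why interpolation serves as the reference method in the comparison: it is cheap and artifact-free at low magnification, but the missing high-frequency content is smoothed over rather than recovered.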
    This study applies three feature-extraction-based super-resolution methods to optical satellite imagery and compares them. Sparse Coding Super-Resolution (SCSR) performs single-image super-resolution reconstruction: an image is represented as a sparse linear combination over a dictionary learned from training images, and the sparse representation coefficients computed from the low-resolution patches are applied to the high-resolution dictionary to reconstruct the high-resolution image. The Super-Resolution Convolutional Neural Network (SRCNN) uses sliding windows in different hidden layers to extract image features, updating the weights of the hidden layers to build a well-trained network that reconstructs the high-resolution image. The Fast Super-Resolution Convolutional Neural Network (FSRCNN) further modifies the SRCNN architecture to achieve more efficient super-resolution and higher accuracy.
    This experiment uses Sentinel-2 optical satellite images from the European Space Agency's Copernicus programme for super-resolution tests, and analyzes the reconstructed images produced by bicubic interpolation, SCSR, SRCNN, and FSRCNN. The mean square error (MSE) and peak signal-to-noise ratio (PSNR) are used to compare images across different methods, different magnification factors, and different land covers. In this study, FSRCNN achieves the highest PSNR, and the feature-extraction methods perform markedly better in image reconstruction than the traditional super-resolution methods.
    Satellite imagery has a wide range of applications, such as agriculture, water quality, and environmental monitoring. Because of the limited spatial resolution, some details cannot be observed in optical satellite images. Many super-resolution (SR) approaches have been developed to improve the spatial resolution with computer software or hardware. However, the traditional methods based on interpolation, such as bilinear interpolation and bicubic interpolation, usually cannot restore the high-frequency spatial information.
    In this study, three methods based on feature extraction are applied to the same remote sensing images, and their performance is evaluated. Sparse coding super-resolution (SCSR) seeks a sparse representation for each patch of the low-resolution (LR) input, and then uses the coefficients of that representation to reconstruct the high-resolution (HR) image. Super-resolution convolutional neural network (SRCNN) uses sliding windows in different hidden layers to extract features of the LR image, and then updates the weights to reconstruct the HR image. Finally, the fast super-resolution convolutional neural network (FSRCNN) improves the computational efficiency of SRCNN by modifying the neural network structure.
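The SRCNN pipeline described above (patch extraction, non-linear mapping, reconstruction) can be sketched with plain NumPy. This is a minimal illustration only: the 9-1-5 filter sizes and 64/32 channel counts follow the original SRCNN design, but the weights here are random placeholders, not a trained network.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, w):
    """'Same' cross-correlation of a multi-channel image x (C, H, W)
    with a filter bank w (O, C, k, k), using zero padding."""
    o, c, k, _ = w.shape
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    h, wd = x.shape[1], x.shape[2]
    out = np.zeros((o, h, wd))
    for i in range(h):
        for j in range(wd):
            # Contract the (C, k, k) patch against every output filter.
            out[:, i, j] = np.tensordot(w, xp[:, i:i + k, j:j + k], axes=3)
    return out

def relu(t):
    return np.maximum(t, 0.0)

# SRCNN operates on a bicubic-upscaled image; a random stand-in here.
x = rng.random((1, 16, 16))

w1 = rng.normal(0, 0.1, (64, 1, 9, 9))   # patch extraction, 9x9
w2 = rng.normal(0, 0.1, (32, 64, 1, 1))  # non-linear mapping, 1x1
w3 = rng.normal(0, 0.1, (1, 32, 5, 5))   # reconstruction, 5x5

y = conv2d(relu(conv2d(relu(conv2d(x, w1)), w2)), w3)
print(y.shape)  # (1, 16, 16) -- same spatial size as the upscaled input
```

Note the structural point the abstract makes about FSRCNN: because SRCNN takes an already-upscaled image as input, every convolution runs at the HR resolution; FSRCNN instead operates on the LR image and upscales at the end, which is where its speed gain comes from.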
    Sentinel-2 optical satellite images are adopted in this experiment. Four methods are implemented for performance evaluation: the traditional bicubic interpolation, SCSR, SRCNN, and FSRCNN. The root-mean-square error (RMSE) and peak signal-to-noise ratio (PSNR) are compared in three aspects: between different methods, scaling factors, and land covers. Preliminary results show that FSRCNN performs best, and all the feature-extraction methods outperform the traditional interpolation approach.
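The evaluation metrics named in the abstract are straightforward to compute. A small self-contained version follows, assuming the conventional 8-bit peak value of 255 for PSNR; the 4×4 test images are illustrative, not thesis data.

```python
import numpy as np

def mse(ref, test):
    """Mean squared error between two images of equal shape."""
    ref = np.asarray(ref, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    return np.mean((ref - test) ** 2)

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means a closer match."""
    e = mse(ref, test)
    if e == 0:
        return np.inf  # identical images
    return 10.0 * np.log10(peak ** 2 / e)

a = np.zeros((4, 4))
b = np.full((4, 4), 10.0)    # every pixel off by a constant 10

print(mse(a, b))             # 100.0
print(round(psnr(a, b), 2))  # 28.13
```

RMSE, reported in the English abstract, is simply `np.sqrt(mse(ref, test))`, so the two error measures rank the methods identically.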
    Appears in Collections: [Graduate Institute of Space Science] Master's and Doctoral Theses

    Files in This Item:

    File        Description  Size  Format
    index.html               0Kb   HTML


    All items in NCUIR are protected by copyright, with all rights reserved.

