Graduate Thesis 109623013: Detailed Record




Name: Ching-Hsiang Yu (虞景翔)    Department: Graduate Institute of Space Science and Engineering
Thesis Title: The Application of Unsupervised Deep-learning-based Super-resolution in Optical Satellite Imagery (非監督式深度學習超解析成像法應用於光學衛星影像)
Related Theses
★ Anomaly Detection Using Hyperspectral Imagery
★ Automated Lineament Detection in Digital Elevation Models
★ Marine Oil Spill Detection Using Spectral and Spatial Information of Multispectral Imagery Combined with Mathematical Morphology
★ Automatic Extraction of Control Points from Remote Sensing Imagery
★ A New Image Fusion Algorithm Applied to Multispectral Remote Sensing Imagery
★ Real-time Debris Flow Detection Using Fixed Cameras
★ Automatic Debris Flow Detection by Computer Vision
★ Analyzing Canopy Structure from Full-waveform LiDAR Using a Multi-layer Model
★ Radiometric Correction of Multispectral Imagery with MHE and Its Application to Debris Flow Change Detection
★ Simulation and Experiment of Single-Transmit Multiple-Receive Synthetic Aperture Radar
★ Implementation of FORMOSAT-5 Satellite Image Compression
★ A Study of Electromagnetic Scattering Models on Rough Surfaces
★ Ultra-wideband Ka-band Scattering Measurements and Analysis of Trees
★ 3D Reconstruction from Stereo Image Pairs by Automatic Feature Point Extraction
★ Comparison of Change Detection Methods Based on the Spatial Chaotic Model for Synthetic Aperture Radar Imagery
★ Detecting the Orientation of Dish Antennas Using Remote Sensing Imagery
Files: The full text will be available for browsing in the system after 2025-07-31.
Abstract (Chinese): The goal of super-resolution imaging is to increase the spatial resolution of an image and recover details lost at the sub-pixel level. Traditional resolution-enhancement methods mostly rely on linear or nonlinear interpolation; they run quickly and consume few computing resources, but the results tend to be blurry and the recovery of high-frequency details is unsatisfactory. With the development of neural networks, convolutional neural networks designed for image processing have been widely applied to super-resolution. Many CNN-based super-resolution methods are supervised: they require large numbers of paired external images to train a model that enhances the resolution of specific targets such as faces, scenery, or buildings, so training is computationally demanding and time-consuming, and the outcome depends on the richness of the external image set and the number of training iterations. In contrast, unsupervised super-resolution does not need costly paired external image sets to learn the mapping from low resolution to high resolution, which greatly reduces the computational cost, gives it better adaptability to unseen data, and improves its usability and applicability in real-world scenarios.
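As a point of reference for the interpolation baseline described above, the sketch below upsamples a single low-resolution band with bicubic resampling in Python; the file names and the scale factor are illustrative placeholders, not values taken from the thesis.

```python
# Bicubic upsampling as the traditional, fast baseline: quick to run, but it
# tends to blur high-frequency detail. File names and the scale factor are
# hypothetical placeholders.
from PIL import Image

scale = 4                                         # assumed upscaling factor
lr = Image.open("sentinel2_band_lr.png")          # placeholder low-resolution input
hr_bicubic = lr.resize((lr.width * scale, lr.height * scale), resample=Image.BICUBIC)
hr_bicubic.save("sentinel2_band_bicubic_x4.png")  # interpolated result for comparison
```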
This thesis examines three unsupervised learning methods, Zero-shot Super-resolution (ZSSR), Deep Image Prior (DIP), and Degradation-aware Super-resolution (DASR), and compares their performance in enhancing the spatial resolution of Sentinel-2 and SPOT satellite images. The experimental results show that, compared with bicubic interpolation, the unsupervised methods improve the Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM), and Feature Similarity (FSIM), and they remain competitive with supervised methods that require expensive training resources. The results also show that, among the three unsupervised methods, Degradation-aware Super-resolution performs best on optical satellite imagery.
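For reference, two of the quality metrics named above have standard closed-form definitions: PSNR is computed from the mean squared error between the reference image and the reconstruction, and SSIM compares local windows through their means, variances, and covariance with small stabilizing constants C1 and C2. FSIM is built analogously from phase congruency and gradient magnitude features and is omitted here for brevity.

```latex
\[
\mathrm{MSE} = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl(I(i,j)-\hat{I}(i,j)\bigr)^{2},
\qquad
\mathrm{PSNR} = 10\,\log_{10}\frac{\mathrm{MAX}_{I}^{2}}{\mathrm{MSE}}
\]
\[
\mathrm{SSIM}(x,y) =
\frac{(2\mu_{x}\mu_{y}+C_{1})(2\sigma_{xy}+C_{2})}
     {(\mu_{x}^{2}+\mu_{y}^{2}+C_{1})(\sigma_{x}^{2}+\sigma_{y}^{2}+C_{2})}
\]
```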
Abstract (English): The purpose of super-resolution imaging is to improve the spatial resolution of images and restore details lost at the sub-pixel level. Traditional methods based on linear or nonlinear interpolation are fast and consume few computing resources, but the results are usually blurry and poorly suited to recovering high-frequency details. With the development of artificial intelligence, convolutional neural networks (CNNs) have been widely adopted for image super-resolution. Most CNN-based super-resolution algorithms are supervised: they need a huge amount of external training image pairs to train a model that improves the image resolution for specific features such as faces, scenery, or buildings. They therefore require high computational power and long training times, and their effectiveness depends on the richness of the external image set and the number of training iterations. On the other hand, unsupervised super-resolution algorithms aim to learn the relationship from low-resolution to high-resolution images from the test image itself. Unsupervised methods do not need paired training data and can reduce computing resources significantly. This also lets them adapt flexibly to new and unseen data, increasing their accessibility and applicability in real-world scenarios.
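To make the idea of learning only from the test image itself more concrete, the following PyTorch sketch follows the spirit of zero-shot super-resolution: the test image is downscaled to synthesize a coarse/fine training pair, a small residual CNN is trained on that single pair, and the trained network is then applied one scale factor above the test image. The architecture, loss, iteration count, and scale factor are illustrative assumptions rather than the exact configuration used in this thesis (real ZSSR additionally samples random crops and augmentations).

```python
# Minimal internal-learning sketch in the spirit of ZSSR; all hyperparameters
# here are illustrative assumptions, not the thesis configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

scale = 2  # assumed upscaling factor


def downscale(img, s):
    # Bicubic downscaling used to synthesize the coarse version of the test image.
    return F.interpolate(img, scale_factor=1.0 / s, mode="bicubic", align_corners=False)


def upscale(img, s):
    return F.interpolate(img, scale_factor=s, mode="bicubic", align_corners=False)


# A small CNN that predicts the residual detail missing from a bicubic upsampling.
net = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(64, 3, 3, padding=1),
)

# Stand-in for the low-resolution satellite tile to be super-resolved.
test_image = torch.rand(1, 3, 128, 128)

optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(200):                      # illustrative iteration count
    lr_child = downscale(test_image, scale)  # coarser child of the test image
    inp = upscale(lr_child, scale)           # bicubic guess back at the original size
    pred = inp + net(inp)                    # network adds the missing detail
    loss = F.l1_loss(pred, test_image)       # the test image itself is the target
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Apply the trained network one scale factor above the test image.
with torch.no_grad():
    base = upscale(test_image, scale)
    sr = base + net(base)                    # super-resolved output
```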
In this study, three unsupervised super-resolution methods, Zero-shot Super-resolution (ZSSR), Deep Image Prior (DIP), and Degradation-aware Super-resolution (DASR), are implemented and applied to improve the spatial resolution of Sentinel-2 and SPOT images. The experimental results show that the unsupervised methods improve the Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM), and Feature Similarity (FSIM) compared with bicubic interpolation, and are also competitive with supervised methods that consume expensive training resources. The results also show that Degradation-aware Super-resolution performs best among the three unsupervised methods.
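As a concrete illustration of how the reported scores can be obtained, the snippet below computes PSNR and SSIM with scikit-image; the arrays are random placeholders standing in for a ground-truth band and a super-resolved result. FSIM is not part of scikit-image and usually relies on a separate implementation.

```python
# Compute PSNR and SSIM between a reference image and a super-resolved result.
# The arrays are placeholders; in practice they would be co-registered bands.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

hr_reference = np.random.rand(256, 256).astype(np.float32)                # stand-in reference
sr_result = np.clip(hr_reference + 0.01 * np.random.randn(256, 256), 0.0, 1.0).astype(np.float32)

psnr = peak_signal_noise_ratio(hr_reference, sr_result, data_range=1.0)   # in dB, higher is better
ssim = structural_similarity(hr_reference, sr_result, data_range=1.0)     # closer to 1 is better
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")
```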
Keywords (Chinese): ★ satellite imagery
★ convolutional neural network
★ super-resolution imaging
★ unsupervised learning
Keywords (English): ★ satellite imagery
★ convolutional neural network
★ super-resolution
★ unsupervised learning
Table of Contents
Abstract (Chinese)
Abstract (English)
Contents
List of Figures
List of Tables
Chapter 1 Introduction
1.1 Motivation
1.2 Overview
1.3 Thesis organization
Chapter 2 Literature Review
2.1 Deep learning for image processing
2.2 Supervised super-resolution imaging
2.3 Unsupervised super-resolution imaging
2.4 Super-resolution in remote sensing
Chapter 3 Methodology
3.1 Zero-shot super-resolution (ZSSR)
3.2 Deep image prior (DIP)
3.3 Degradation-aware super-resolution (DASR)
3.4 Evaluation of image quality
Chapter 4 Research materials
4.1 Optical satellites
4.1.1 Sentinel-2 satellites
4.1.2 SPOT satellites
4.2 Dataset
4.2.1 Data collection
4.2.2 Image processing
Chapter 5 Experiments and Discussion
5.1 Flow chart
5.2 Experimental results
5.3 Discussion
5.3.1 Comparison of image quality assessment
5.3.2 Visual comparison
5.3.3 The suitability of unsupervised super-resolution
Chapter 6 Conclusions and Future Work
6.1 Conclusions
6.2 Future work
References
Advisor: Hsuan Ren (任玄)    Date of Approval: 2023-07-28
