NCU Institutional Repository - Item 987654321/86336


    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/86336


    Title: Scale-recurrent Network Based Generative Adversarial Network for Image Deblurring
    Author: Hsu, Wei-Hsiang (許位祥)
    Contributor: Department of Communication Engineering
    Keywords: single image deblurring; generative adversarial network; scale-recurrent network; pseudo label
    Date: 2021-07-19
    Upload date: 2021-12-07 12:34:39 (UTC+8)
    Publisher: National Central University
    Abstract: Camera shake or object motion during capture produces motion-blurred images, which degrade the viewing experience and reduce the accuracy of downstream tasks such as visual tracking and object detection. Existing deep-learning approaches usually trade a high network parameter count or memory usage for high-quality deblurred images. SRN^+ is an existing deep-learning single-image deblurring network with a comparatively low parameter count and good performance. This thesis therefore adopts the SRN^+ architecture as the generator and, at the training stage, adds a discriminator assisted by pseudo labels to improve the quality of the generator's deblurred images. Unlike a standard GAN (generative adversarial network), the pseudo-label-assisted GAN provides the discriminator with the deblurred image and its corresponding sharp image at the same time, so the discriminator can supply a more accurate loss to guide the generator's optimization and improve the recovery of image detail. Funnel soft labelling replaces binary labels to weaken the discriminator's learning ability, making the generator less prone to vanishing gradients and stabilizing GAN training. In addition, this thesis assigns different weights to the loss functions at different scales, giving larger weights to the losses of deblurred images at the larger-scale stages, and replaces mean absolute error (MAE) with mean squared error (MSE) in the loss function at the largest scale to produce sharper deblurred images. At the test stage only the generator is needed to output the deblurred image, so the proposed scheme has the same number of network parameters and the same computational complexity as SRN^+. On the GoPro dataset, the proposed scheme is 0.51 dB higher than SRN^+ in peak signal-to-noise ratio (PSNR) and 0.005 higher in structural similarity index measure (SSIM); compared with the lightest version (i.e., 1-stage) of the state-of-the-art deblurring network MPRNet, it is over 1 dB higher in PSNR with only 7/10 of MPRNet (1-stage)'s parameter count.
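The pseudo-label setup described above gives the discriminator a candidate image together with its sharp reference, rather than a single image. As a hedged illustration of how such paired inputs might be assembled (the thesis defines the exact input format; channel-wise concatenation is an assumption here), a minimal NumPy sketch:

```python
import numpy as np

def discriminator_pairs(deblurred, sharp):
    """Build paired discriminator inputs for a pseudo-label GAN setup.

    The discriminator judges a candidate image paired with the
    corresponding sharp image along the channel axis: (sharp, sharp)
    forms the 'real' pair and (deblurred, sharp) the 'fake' pair.
    Channel-wise concatenation is an illustrative assumption, not the
    thesis's exact layout.
    """
    real_pair = np.concatenate([sharp, sharp], axis=0)      # (2C, H, W)
    fake_pair = np.concatenate([deblurred, sharp], axis=0)  # (2C, H, W)
    return real_pair, fake_pair

# Toy 3-channel 4x4 images: an all-zero "deblurred" candidate and an
# all-one "sharp" reference.
r_pair, f_pair = discriminator_pairs(np.zeros((3, 4, 4)), np.ones((3, 4, 4)))
```

Because the reference image is always present, the discriminator's real/fake decision is conditioned on how close the candidate is to that specific sharp image, which is what lets it produce a more accurate loss signal for the generator.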
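Funnel soft labelling replaces the binary 0/1 discriminator targets with softened ones. The exact schedule is defined in the thesis body; as a hedged sketch, soft labels can be drawn from intervals that start wide and narrow ("funnel") toward the binary targets over training. The linear schedule and the 0.3 starting width below are illustrative assumptions:

```python
import numpy as np

def funnel_soft_labels(batch_size, real, epoch, total_epochs, rng):
    """Sample soft labels from an interval that narrows toward 0 or 1.

    Early in training the intervals are wide, giving the discriminator
    weaker targets and slowing its learning; they shrink toward the
    binary labels as training progresses. The linear schedule and the
    0.3 initial width are assumptions for illustration only.
    """
    progress = epoch / total_epochs        # 0 -> 1 over training
    width = 0.3 * (1.0 - progress)         # funnel: 0.3 down to 0.0
    if real:
        low, high = 1.0 - width, 1.0       # e.g. [0.7, 1.0] early on
    else:
        low, high = 0.0, width             # e.g. [0.0, 0.3] early on
    return rng.uniform(low, high, size=batch_size)

rng = np.random.default_rng(0)
early = funnel_soft_labels(8, real=True, epoch=0, total_epochs=100, rng=rng)
late = funnel_soft_labels(8, real=True, epoch=99, total_epochs=100, rng=rng)
```

Keeping the discriminator's targets soft early on is what prevents it from overpowering the generator, which is the source of the vanishing-gradient problem the abstract mentions.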
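The scale-weighted reconstruction loss can be sketched as follows: each scale of the scale-recurrent generator contributes a loss term, larger scales get larger weights, and only the largest scale switches from MAE to MSE. The specific weight values below are placeholders, not the thesis's settings:

```python
import numpy as np

def multiscale_loss(preds, targets, weights):
    """Weighted multi-scale reconstruction loss.

    preds/targets: lists of arrays ordered from smallest to largest
    scale. Smaller scales use MAE; the largest scale uses MSE, which
    penalizes large residuals more heavily and so favors sharper output.
    The weight values passed in are illustrative assumptions.
    """
    total = 0.0
    last = len(preds) - 1
    for i, (p, t, w) in enumerate(zip(preds, targets, weights)):
        if i == last:                        # largest scale: MSE
            total += w * np.mean((p - t) ** 2)
        else:                                # smaller scales: MAE
            total += w * np.mean(np.abs(p - t))
    return total

# Toy two-scale example: MAE term 0.5 * 1.0, MSE term 1.0 * 4.0.
loss = multiscale_loss(
    [np.ones((2, 2)), np.full((4, 4), 2.0)],
    [np.zeros((2, 2)), np.zeros((4, 4))],
    [0.5, 1.0],
)
```

Because MSE squares the residual, errors at the full-resolution output dominate the gradient, which matches the abstract's goal of concentrating optimization effort on the largest-scale deblurred image.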
    Appears in collections: [Graduate Institute of Communication Engineering] Master's and Doctoral Theses



    All items in NCUIR are protected by copyright.

