NCU Institutional Repository (中大機構典藏): Item 987654321/80993


    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/80993


    Title: Fully Convolutional Networks Based Reflection Separation for Light Field Images
    Authors: 張瑞宇;Chang, Ruei-Yu
    Contributors: Department of Communication Engineering (通訊工程學系)
    Keywords: light field images; mixed images; reflection separation; deep learning; fully convolutional networks
    Date: 2019-07-30
    Issue Date: 2019-09-03 15:24:31 (UTC+8)
    Publisher: National Central University (國立中央大學)
    Abstract: Existing reflection separation schemes designed for multi-view images cannot be applied to light field images, because they are not designed for dense light fields with narrow baselines, and the few existing reflection separation schemes for light field images all require estimating a disparity map for the central view first. Different from previous work, this thesis adopts EPINET, a network originally designed for disparity estimation of reflection-free light field images, to separate reflections from mixed light field images with weak reflections. At the training stage, the fully convolutional network (FCN) takes multi-view image stacks along the principal directions of the light field as inputs and learns significant convolutional features of the background layer in an end-to-end manner; after these features are merged, the FCN directly predicts the pixel-wise background layer of the central view. In addition, this thesis analyzes real-world mixed light field images captured by a light field camera and observes that the background layer and the reflection layer shift by different amounts across views; based on this observation, a mixed light field image dataset satisfying realistic conditions is constructed. Experimental results show that the background layer of synthetic mixed light field images can be reconstructed effectively by using EPINET together with the dataset proposed in this thesis.
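
    The abstract describes two technical ideas: synthesizing mixed light field views in which the background and reflection layers shift by different amounts from view to view, and feeding an EPINET-style FCN multi-view image stacks taken along the principal directions of the light field. The sketch below is a minimal NumPy illustration of those two ideas under stated assumptions, not the thesis code: the grid size, disparities d_bg/d_ref, blending weight alpha, and all function names are hypothetical, and integer circular shifts stand in for real sub-aperture warping.

```python
# Minimal sketch (assumptions, not the author's code): build a weak-reflection
# mixed light field by shifting the background and reflection layers with
# different per-view disparities, then gather the view stacks along the four
# principal directions that an EPINET-style network would take as input.
import numpy as np

def shift_image(img, dx, dy):
    """Integer circular shift of an H x W (x C) image (a simplification of
    real sub-aperture warping)."""
    return np.roll(img, shift=(dy, dx), axis=(0, 1))

def make_mixed_light_field(background, reflection,
                           grid=9, d_bg=1, d_ref=3, alpha=0.3):
    """Return a grid x grid array of mixed sub-aperture views.

    background, reflection : H x W images in [0, 1]
    d_bg, d_ref            : per-view disparities (pixels) of each layer;
                             they differ, which is the cue for separation
    alpha                  : weak-reflection blending weight (assumed value)
    """
    c = grid // 2
    views = np.empty((grid, grid) + background.shape, dtype=np.float32)
    for v in range(grid):
        for u in range(grid):
            bg = shift_image(background, (u - c) * d_bg, (v - c) * d_bg)
            rf = shift_image(reflection, (u - c) * d_ref, (v - c) * d_ref)
            views[v, u] = (1 - alpha) * bg + alpha * rf
    return views

def principal_direction_stacks(views):
    """Stack sub-aperture views along the four principal directions through
    the central view (horizontal, vertical, and the two diagonals), i.e. the
    kind of multi-view image-stack input described in the abstract."""
    grid = views.shape[0]
    c = grid // 2
    idx = np.arange(grid)
    horizontal = views[c, :]                  # grid x H x W
    vertical   = views[:, c]
    diag_main  = views[idx, idx]
    diag_anti  = views[idx, grid - 1 - idx]
    return horizontal, vertical, diag_main, diag_anti
```

    In this sketch the network (not shown) would consume the four stacks and regress the background layer of the central view pixel by pixel; the target for training is simply the unshifted background image.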
    Appears in Collections: [Graduate Institute of Communication Engineering] Electronic Thesis & Dissertation

    Files in This Item:

    File          Description    Size    Format
    index.html                   0Kb     HTML


    All items in NCUIR are protected by copyright, with all rights reserved.
