Existing reflection separation schemes designed for multi-view images are not applicable to light field images, because they were not designed for the narrow baselines of dense light fields. Moreover, the few existing reflection separation schemes for light field data all require estimating a disparity map of the central view before separation. Different from previous work, this thesis adopts EPINET, a network originally designed for disparity estimation on reflection-free light field images, to separate mixed light field images with weak reflections. At the training stage, the fully convolutional network takes multi-view image stacks along the principal directions of the light field as input and learns, in an end-to-end manner, the significant convolutional features of the multi-view background layer; after these features are merged, the network directly predicts pixel-wise gray-scale values of the background layer for the central view. In addition, this thesis analyzes mixed light field images captured by a light field camera in the real world, in which the background layer and the reflection layer exhibit different displacements across views; based on this observation, a mixed light field image dataset satisfying realistic conditions is constructed. Experimental results show that EPINET, together with the mixed light field image dataset proposed in this thesis, can effectively reconstruct the background layer of synthetic mixed light field images.
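The key property behind the proposed dataset, that the background and reflection layers shift by different amounts across views because they lie at different depths, can be illustrated with a short sketch. The following is a minimal example under a simple linear mixing model with integer per-view shifts; the function name synthesize_mixed_light_field, the 9x9 angular grid, and the values of d_bg, d_refl, and alpha are illustrative assumptions, not the exact procedure or parameters used in the thesis.

```python
import numpy as np

def synthesize_mixed_light_field(background, reflection,
                                 grid=9, d_bg=1.0, d_refl=2.5,
                                 alpha=0.3):
    """Sketch of synthesizing a weak-reflection mixed light field.

    background, reflection: 2-D grayscale images (H, W), float in [0, 1].
    grid: angular resolution (grid x grid sub-aperture views).
    d_bg, d_refl: per-view disparity (pixels) of the background and
        reflection layers; the two layers shift by different amounts
        across views because they lie at different depths (assumed values).
    alpha: reflection strength; a small value models weak reflection.
    """
    c = grid // 2  # index of the central view
    views = np.zeros((grid, grid) + background.shape, dtype=np.float32)
    for v in range(grid):
        for u in range(grid):
            # shifts of each layer relative to the central view (u - c, v - c)
            bg_shift = (round((v - c) * d_bg), round((u - c) * d_bg))
            re_shift = (round((v - c) * d_refl), round((u - c) * d_refl))
            bg = np.roll(background, bg_shift, axis=(0, 1))
            re = np.roll(reflection, re_shift, axis=(0, 1))
            # linear mixing model: weak reflection superimposed on background
            views[v, u] = (1.0 - alpha) * bg + alpha * re
    return views
```

Views generated this way can then be grouped into image stacks along the principal directions (horizontal, vertical, and the two diagonals through the central view) to form the multi-stream input described above; the ground-truth target for the central view is simply the unmixed background image.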