Deep neural networks (DNNs) have become a popular means of separating a target source from a mixed signal. However, almost all DNN-based separation methods use only the magnitude spectrum of the mixture as training data; the phase spectrum, which is inherent in the short-time Fourier transform (STFT) coefficients of the input signal, is left unchanged. Recent studies have revealed that incorporating phase information can improve the perceptual quality of separated sources. Accordingly, in this thesis we retain the phase of the spectrum during separation and regard the estimation of the STFT coefficients of the target sources from the input mixture as a complex-domain regression problem. We develop a fully complex-valued deep neural network to learn the nonlinear mapping from the complex-valued STFT coefficients of a mixture to those of its sources: the mixture is first transformed to the time-frequency domain by the STFT, and its complex STFT coefficients are fed into the network, so that magnitude and phase information are considered jointly. In addition, we augment the cost function with reconstruction and sparsity constraints to further improve separation quality. The proposed method is applied to both speech separation and singing-voice separation.
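To make the two central ideas concrete, the sketch below shows (a) one complex-valued dense layer with a split real/imaginary activation, operating directly on complex STFT coefficients, and (b) a toy cost with a reconstruction term (the estimated sources should sum back to the mixture) plus an L1 sparsity penalty. This is a minimal illustration only: the layer sizes, the split-ReLU activation, and the exact form of the penalties are assumptions, not the network or cost actually used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

n_freq = 257    # STFT bins per frame (assumed frame size)
n_hidden = 128  # hidden units (assumed)

# Complex-valued weights and bias for one dense layer.
W = 0.01 * (rng.standard_normal((n_hidden, n_freq))
            + 1j * rng.standard_normal((n_hidden, n_freq)))
b = np.zeros(n_hidden, dtype=np.complex128)

def split_relu(z):
    """ReLU applied separately to the real and imaginary parts."""
    return np.maximum(z.real, 0.0) + 1j * np.maximum(z.imag, 0.0)

def complex_layer(x):
    """One complex affine transform followed by the split activation."""
    return split_relu(W @ x + b)

def cost(est1, est2, mix, lam=0.01):
    """Toy cost: squared reconstruction error of the summed estimates
    against the mixture, plus an L1 sparsity penalty (assumed form)."""
    recon = np.sum(np.abs(est1 + est2 - mix) ** 2)
    sparse = lam * (np.sum(np.abs(est1)) + np.sum(np.abs(est2)))
    return recon + sparse

# A toy complex STFT frame of a two-source mixture.
x = rng.standard_normal(n_freq) + 1j * rng.standard_normal(n_freq)
h = complex_layer(x)
print(h.shape, h.dtype)  # hidden representation keeps a complex dtype

# Perfect estimates (summing to the mixture) incur only the sparsity term.
c = cost(0.5 * x, 0.5 * x, x)
```

Because both magnitude and phase live in the complex coefficients, a network of such layers can in principle adjust phase as well as magnitude, which is the point the abstract makes against magnitude-only methods.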