Deep neural networks (DNNs) have performed impressively in multimedia signal processing. However, most DNN-based approaches were developed to handle real-valued data; very few have been designed for complex-valued data, even though such data are essential in many multimedia applications. Accordingly, this work presents a complex-valued deep recurrent neural network (C-DRNN) for singing voice separation. The C-DRNN operates directly on the complex-valued short-time Fourier transform (STFT) domain, and a key aspect of the architecture is that its weights and activation functions are all complex-valued. The goal herein is to reconstruct the singing voice and the background music from a mixed signal. For error back-propagation, CR-calculus is utilized to compute the complex-valued gradients of the objective function. To reinforce model regularity, two constraints are incorporated into the cost function of the C-DRNN. The first is an additional masking layer that ensures that the sum of the separated sources equals the input mixture. The second is a discriminative term that preserves the mutual difference between the two separated sources. Finally, the proposed method is evaluated on a singing voice separation task using the MIR-1K dataset. Experimental results demonstrate that the proposed method outperforms state-of-the-art DNN-based methods.
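A minimal sketch of the two cost-function constraints and the CR-calculus update may help fix ideas. The equations below assume a ratio-masking layer and discriminative objective in the style of Huang et al.'s DRNN framework, adapted to complex spectra; the symbols are illustrative and not taken from the thesis: z is the mixture STFT, \tilde{y}_1, \tilde{y}_2 are the raw network outputs, y_1, y_2 are the clean sources, \gamma is the discriminative weight, and \mu is a step size.

% Masking layer (sketch): ratio masks built from the output magnitudes
% sum to one at every time-frequency bin (t,f), so the two complex
% estimates \hat{y}_1 + \hat{y}_2 reconstruct the input mixture z exactly.
\[
\hat{y}_i(t,f) = \frac{|\tilde{y}_i(t,f)|}{|\tilde{y}_1(t,f)| + |\tilde{y}_2(t,f)|}\, z(t,f), \qquad i = 1, 2
\]

% Discriminative cost (sketch): the -\gamma terms penalize each estimate
% for resembling the *other* source, preserving the mutual difference
% between the two separated outputs.
\[
J = \|\hat{y}_1 - y_1\|_F^2 + \|\hat{y}_2 - y_2\|_F^2
    - \gamma \left( \|\hat{y}_1 - y_2\|_F^2 + \|\hat{y}_2 - y_1\|_F^2 \right)
\]

% CR-calculus update (sketch): for a real-valued cost J of complex
% weights w, the steepest-descent direction follows the conjugate
% Wirtinger derivative \partial J / \partial w^{*}.
\[
w \leftarrow w - \mu\, \frac{\partial J}{\partial w^{*}}
\]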