With the rapid progress of deep learning, music source separation models have achieved impressive results. However, high separation quality often comes at the cost of complex architectures, limiting real-time applications. This study improves MMDenseNet for singing voice separation and develops a real-time karaoke system that captures playing music and separates the accompaniment from the vocals. We integrate a traditional fundamental frequency (F0) estimation algorithm into complex ratio mask estimation, forming an integrated complex mask framework. Experimental results show that our approach outperforms the original MMDenseNet on the MUSDB18 dataset while having the smallest parameter count among all compared methods. To enable real-time operation, we redesign the model with a shallow neural network that learns the traditional F0 estimation, reducing the computational load. The final system runs in real time on consumer-level PCs and on edge devices such as the NVIDIA Jetson AGX Xavier, demonstrating its potential for karaoke and related applications.
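For readers unfamiliar with complex ratio masks: unlike a magnitude mask, a complex ratio mask has real and imaginary parts, so it rescales and phase-rotates each time-frequency bin of the mixture. The following is a minimal sketch of that masking step only, not the thesis implementation; the function name, array shapes, and random data are illustrative assumptions.

```python
import numpy as np

def apply_complex_ratio_mask(mixture_stft: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Apply a complex ratio mask to a mixture STFT.

    The mask M = M_r + j*M_i acts by complex multiplication,
    S_hat = M * X, so each bin's magnitude is scaled and its
    phase is corrected, unlike a real-valued magnitude mask.
    """
    return mask * mixture_stft

# Toy example (hypothetical shapes): 513 frequency bins x 100 frames.
rng = np.random.default_rng(0)
X = rng.standard_normal((513, 100)) + 1j * rng.standard_normal((513, 100))
M = rng.standard_normal((513, 100)) + 1j * rng.standard_normal((513, 100))
S_hat = apply_complex_ratio_mask(X, M)
print(S_hat.shape)  # (513, 100); an inverse STFT would then recover the waveform
```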