Chinese Abstract

This dissertation investigates nonnegative matrix factorization (NMF) and its extensions, and studies their applications to image processing and audio processing. Our contributions are twofold. First, we develop an NMF method based on the Manhattan distance and use superpixels as features to solve the color image segmentation problem. Second, we propose an NMF method that combines a spatial dispersion regularization term with a sparseness constraint (SpaSNMF) and apply it to audio source separation. For the NMF bases, we design a constraint that preserves the spatial structure of the input data; through this constraint, the elements of each spectrogram basis become less dispersed. In addition, we impose a group-sparsity constraint to further improve separation performance. Finally, we apply the proposed SpaSNMF to the image clustering problem and examine its effectiveness.

Abstract

In this dissertation, we propose new extensions of nonnegative matrix factorization (NMF) that are specifically suited to analyzing image and musical signals. First, we give an overview of NMF, covering its definitions and algorithms, and discuss the sparseness, graph, and spatial constraints that can be added to the factorization of signals. We then develop a novel color image segmentation method that uses superpixels as a new feature representation and formulates the segmentation problem as a multiple Manhattan-distance nonnegative matrix factorization. Second, we develop a sparsity-regularized nonnegative matrix factorization scheme with a spatial dispersion penalty (SpaSNMF). This new dictionary-learning method uses the beta divergence to measure reconstruction error while preserving distance-repulsion properties, so that compact bases are obtained simultaneously. To improve separation performance, group-sparsity penalties are imposed at the same time. A multiplicative-update-rule optimization scheme is used to solve the objective function of the proposed SpaSNMF. Experiments on single-channel source separation reveal that the proposed method provides more robust basis factors and achieves better results than standard NMF and its extensions. In addition, this thesis examines the effectiveness of the spectrogram dispersion penalty for dictionary learning; experimental results show that NMF with the spectrogram dispersion penalty learns better dictionaries than NNDSVD, PCA, NMF, GNMF, SNMF, and GSNMF. Finally, we study another NMF approach for image clustering, which extends the original NMF by combining a pixel dispersion penalty, sparseness constraints with the l2 norm, and graph regularization into a new objective function.
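To make the factorization model concrete, the following is a minimal sketch (in Python, with assumed function and variable names) of the plain multiplicative-update NMF baseline for the Euclidean (beta = 2) cost. It is not the SpaSNMF objective itself; the proposed method augments this baseline with the spatial dispersion and group-sparsity penalties described above.

```python
import numpy as np

def nmf_multiplicative(V, rank, n_iter=200, eps=1e-9, seed=0):
    """Plain NMF with Lee-Seung multiplicative updates for the
    Euclidean (beta = 2) cost ||V - WH||_F^2.  This is only the
    unregularized baseline; SpaSNMF adds spatial-dispersion and
    group-sparsity terms to the objective."""
    rng = np.random.default_rng(seed)
    n_rows, n_cols = V.shape
    W = rng.random((n_rows, rank)) + eps   # nonnegative bases (dictionary)
    H = rng.random((rank, n_cols)) + eps   # nonnegative activations
    for _ in range(n_iter):
        # activation update: H <- H * (W^T V) / (W^T W H)
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        # basis update: W <- W * (V H^T) / (W H H^T)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# toy usage on a random nonnegative "spectrogram"
V = np.abs(np.random.default_rng(1).standard_normal((64, 100)))
W, H = nmf_multiplicative(V, rank=8)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))  # relative reconstruction error
```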