Thesis 103522606: Detailed Record




Name: Pham Bach Tung (範白松)    Department: Computer Science and Information Engineering
Thesis Title: New Approaches on Nonnegative Matrix Factorization and Their Applications (新穎的非負矩陣分解法及其應用)
Related Theses
★ Single and Multi-Label Environmental Sound Recognition with Gaussian Process
★ Embedded System Implementation of Beamforming and Audio Front-End Processing
★ Applications and Design of Speech Synthesis and Voice Conversion
★ A Semantics-Based Public Opinion Analysis System
★ Design and Applications of a High-Quality Dictation System
★ Calcaneal Fracture Recognition and Detection in CT Images Using Deep Learning and Accelerated Robust Features
★ A Personalized Collaborative-Filtering Clothing Recommendation System Based on a Style Vector Space
★ RetinaNet for Face Detection
★ Trend Prediction for Financial Products
★ Integrating Deep Learning Methods to Predict Age and Aging-Related Genes
★ A Study on End-to-End Mandarin Speech Synthesis
★ Application and Improvement of ORB-SLAM2 on the ARM Architecture
★ Deep Learning-Based Trend Prediction for Exchange-Traded Funds
★ Exploring the Correlation Between Financial News and Financial Trends
★ Emotional Speech Analysis Based on Convolutional Neural Networks
★ Predicting Alzheimer's Disease Progression and Stroke Surgery Survival Using Deep Learning
Files: The full text is not available in the system (access permanently restricted).
Abstract (Chinese)
This dissertation investigates nonnegative matrix factorization (NMF) methods and their extensions, and studies their applications to image processing and audio processing. Our contributions are twofold. First, we develop an NMF method based on the Manhattan distance and use superpixels as the feature representation to solve the color image segmentation problem. Second, we propose an NMF method that combines a spatial dispersion regularization term with sparsity constraints (SpaSNMF) and apply it to audio source separation. For the bases of the nonnegative factorization, we design a constraint that preserves the spatial structure of the input data; with this constraint, the elements within each spectrogram basis become less dispersed. In addition, we introduce a group sparsity constraint to further improve separation performance. Finally, we apply the proposed SpaSNMF to the image clustering problem and examine its effectiveness.
Abstract (English)
In this dissertation, we propose new extensions of nonnegative matrix factorization (NMF) that are specifically suited to analyzing image and musical signals. First, we give an overview of NMF, covering its definitions and algorithms, and discuss the sparseness, graph, and spatial constraints that can be added to the factorization. We then develop a novel color image segmentation method that uses superpixels as the feature representation and formulates the segmentation problem as a Manhattan-distance nonnegative matrix factorization.
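As background for these extensions, the following is a minimal Python/NumPy sketch of the baseline factorization V ≈ WH using the classic multiplicative update rules of Lee and Seung (listed in the references below). The matrix sizes, iteration count, and random test data are illustrative assumptions only; this is not the Manhattan-distance, superpixel-based method developed in the thesis.

import numpy as np

def nmf_multiplicative(V, r, n_iter=200, eps=1e-9):
    """Factor a nonnegative matrix V (m x n) into W (m x r) and H (r x n)
    by minimizing the Frobenius error ||V - W H||_F^2 with the classic
    multiplicative update rules."""
    m, n = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)    # update activations
        W *= (V @ H.T) / (W @ (H @ H.T) + eps)  # update basis vectors
    return W, H

# Toy usage on random nonnegative data with 5 components.
V = np.random.default_rng(1).random((64, 100))
W, H = nmf_multiplicative(V, r=5)
print("reconstruction error:", np.linalg.norm(V - W @ H))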
Second, we develop a sparse regularized nonnegative matrix factorization scheme with a spatial dispersion penalty (SpaSNMF). This new dictionary learning method uses the beta divergence to measure reconstruction error and preserves distance-repulsion properties so that compact bases are obtained simultaneously. To improve separation performance, group sparsity penalties are incorporated as well. A multiplicative-update optimization scheme is used to solve the objective function of the proposed SpaSNMF. Experiments on single-channel source separation show that the proposed method provides more robust basis factors and achieves better results than standard NMF and its extensions. In addition, the effectiveness of the spectrogram dispersion penalty for dictionary learning is examined in this thesis; the experimental results demonstrate its advantage over NNDSVD, PCA, NMF, GNMF, SNMF, and GSNMF.
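SpaSNMF couples the beta divergence with a spectrogram dispersion penalty and group sparsity, and its exact objective and update rules are derived in Chapter 4. As a simplified, hedged sketch of the same family of algorithms, the snippet below shows heuristic multiplicative updates for beta-divergence NMF (following Févotte and Idier, listed in the references below) with only a plain L1 sparsity weight lam on the activations; the penalty form, its weight, and the rescaling step are illustrative assumptions rather than the thesis's formulation.

import numpy as np

def beta_nmf_sparse(V, r, beta=1.0, lam=0.1, n_iter=200, eps=1e-9):
    """Beta-divergence NMF with an L1 penalty lam * sum(H) on the activations.
    beta = 2: Euclidean distance, beta = 1: KL divergence, beta = 0: Itakura-Saito."""
    m, n = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (WH ** (beta - 2) * V)) / (W.T @ (WH ** (beta - 1)) + lam + eps)
        WH = W @ H + eps
        W *= ((WH ** (beta - 2) * V) @ H.T) / ((WH ** (beta - 1)) @ H.T + eps)
        # Rescale so each basis column sums to one (H is scaled up to compensate,
        # keeping W @ H unchanged); this removes the scale ambiguity that would
        # otherwise let the L1 penalty be shrunk trivially.
        norms = np.maximum(W.sum(axis=0, keepdims=True), eps)
        W /= norms
        H *= norms.T
    return W, H

For monaural separation, V would be the magnitude spectrogram of the mixture, the learned bases in W would be grouped per source, and each source would be reconstructed from its own subset of columns of W and rows of H.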
Finally, we study another NMF-based approach to image clustering, which extends the original NMF by incorporating a pixel dispersion penalty, sparseness constraints with the l2 norm, and graph regularization into a new objective function.
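Graph regularization, one ingredient of this clustering objective, encourages data points that are neighbours on a similarity graph to receive similar encodings. For orientation, the sketch below shows plain graph-regularized NMF in the style of Cai et al. (listed in the references below); the pixel dispersion penalty and the l2-norm sparseness constraint of the proposed method are omitted, and the similarity graph A, the weight lam, and the update scheme are illustrative assumptions.

import numpy as np

def gnmf(X, A, r, lam=1.0, n_iter=200, eps=1e-9):
    """Graph-regularized NMF: minimize ||X - W H||_F^2 + lam * Tr(H L H^T),
    where L = D - A is the Laplacian of a symmetric similarity graph A
    built over the columns (data points) of X."""
    m, n = X.shape
    D = np.diag(A.sum(axis=1))          # degree matrix of the graph
    rng = np.random.default_rng(0)
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        W *= (X @ H.T) / (W @ (H @ H.T) + eps)
        H *= (W.T @ X + lam * (H @ A)) / (W.T @ W @ H + lam * (H @ D) + eps)
    return W, H

# Clustering would then assign each data point (column of X) to the row of H
# with the largest activation, or run k-means on the columns of H.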
Keywords (Chinese): ★ 新穎的非負矩陣分解法及其應用    Keywords (English): ★ Nonnegative Matrix Factorization and Applications
Table of Contents
Chapter 1 Introduction 14
1.1 Motivation 14
1.2 Aim and Object 15
1.3 Thesis overview 16
Chapter 2 Nonnegative Matrix Factorization 17
2.1 Introduction 17
2.2 NMF algorithms 18
2.2.1 Multiplicative Update Algorithms 19
2.2.2 Gradient Descent Algorithms 20
2.2.3 Alternating Least Squares Algorithms 21
2.3 NMF with additional constraints or regularizations 22
2.3.1 Smoothness constraints 22
2.3.2 Sparsity constraints 23
2.3.3 Graph regularization 23
2.3.4 Spatial constraints (pixel dispersion penalty) 24
Chapter 3 NMF-Based Image Segmentation 27
3.1 Image Segmentation 27
3.2 Superpixels 27
3.2.1 Simple linear iterative clustering (SLIC) 28
3.2.2 SLIC Algorithm 29
1. Algorithm 30
2. Distance measurement 31
3. Algorithm Complexity 32
3.2.3 Superpixel's Features 33
3.3 Image segmentation based on NMF 34
3.3.1 Manhattan NMF (MahNMF) 34
1. Objective function 34
2. Manhattan Distance 34
3.3.2 Estimating the Number of Segmented Groups 35
3.3.3 Group Superpixels 35
3.4 Experiments 36
Chapter 4 Spatial Dispersion Constrained NMF for Monaural Source Separation 39
4.1 Sound source separation 39
4.2 NMF with Beta-Divergence 40
4.3 Proposed Method 41
4.3.1 Constraints for NMF 41
1. Group Sparsity 41
2. Spectrogram dispersion penalty 42
4.3.2 SpaSNMF and the majorization-minimization algorithm 43
1. Algorithm 43
2. Gradient with normalized parameters 44
3. Multiplicative update rule 45
4.4 SpaSNMF for sound separation 46
4.5 Experiment 47
4.5.1 Databases and experiment setting 47
4.5.2 Performance and comparison 48
Chapter 5 The Effectiveness of the Spectrogram Dispersion Penalty on Dictionary Learning 51
5.1 Dictionary learning and source separation 51
5.2 NMF with Beta-Divergence 51
5.3 General function and majorization-minimization algorithm 53
5.4 Experiment 54
5.4.1 Databases and experiment setting 54
5.4.2 Performance and comparison 55
Chapter 6 Sparse Graph Regularized NMF with Spatial Constraints for Clustering 57
6.1 Introduction 57
6.1.1 Data Clustering 57
6.1.2 Previous work 57
6.1.3 Proposed method 58
6.2 Algorithm 58
6.2.3 Sparsity constraint 58
6.2.4 Algorithm 58
6.3 Algorithm Optimization 59
6.3.1 Projected gradient method for bound-constrained optimization 59
6.3.2 Projected gradient method for bound-constrained optimization 59
6.3.3 Stopping condition 60
6.4 Experiment 62
6.4.1 Data and Evaluation Metrics 62
6.4.2 Experiment setting and clustering results 63
6.4.3 Performance of proposed method 64
Chapter 7 Conclusions and future work 66
Bibliographies 68
References
[1] G.H. Golub and C. Reinsch. “Singular value decomposition and least squares solutions”. Numerische Mathematik, 14(5):403–420, 1970.
[2] I. Jolliffe. “Principal component analysis”. Wiley Online Library, 2005.
[3] D.D. Lee and H.S. Seung. “Learning the parts of objects by nonnegative matrix factorization”. Nature, 401(6755):788–791, 1999.
[4] I. Jung, J. Lee, S. Lee, and D. Kim. “Application of nonnegative matrix factorization to improve profile-profile alignment features for fold recognition and remote homolog detection”. BMC Bioinformatics, 9(1):298, 2008.
[5] W. Xu, X. Liu, and Y. Gong. “Document clustering based on non-negative matrix factorization”. In Proceedings of ACM SIGIR Conference on Research and Development in Information Retrieval, pages 267–273. ACM, 2003.
[6] P. Smaragdis and J.C. Brown. “Non-negative matrix factorization for polyphonic music transcription”. In IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, pages 177–180. IEEE, 2003.
[7] C. Févotte, N. Bertin, and J. Durrieu. “Nonnegative matrix factorization with the Itakura-Saito divergence: With application to music analysis”. Neural Computation, 21(3):793–830, 2009.
[8] J. Brunet, P. Tamayo, T.R. Golub, and J.P. Mesirov. “Metagenes and molecular pattern discovery using matrix factorization”. Proceedings of the National Academy of Sciences, 101(12):4164–4169, 2004.
[9] T. Li and C. Ding. “The relationships among various nonnegative matrix factorization methods for clustering”. In Proceedings of International Conference on Data Mining (ICDM), pages 362–371. IEEE, 2006.
[10] C. Ding, X. He, and H.D. Simon. “On the equivalence of nonnegative matrix factorization and spectral clustering”. In Proceedings of SIAM International Conference on Data Mining (SDM), pages 606–610, 2005.
[11] Paatero, P. and Tapper, U., “Positive matrix factorization: a non-negative factor model with optimal utilization of error estimates of data values,” Environmetrics, vol. 5, pp. 111–126, 1994.
[12] Gao, Y. and Church, G., “Improving molecular cancer class discovery through sparse non-negative matrix factorization,” Bioinformatics, vol. 21, no. 21, pp. 3970–3975, 2005.
[13] Kim, H. and Park, H., “Sparse non-negative matrix factorizations via alternating non-negativity-constrained least squares for microarray data analysis,” Bioinformatics, vol. 23, no. 12, pp. 1495–1502, 2007.
[14] Cichocki, A. and Phan, A. H., “Fast local algorithms for large scale nonnegative matrix and tensor factorizations,” IEICE Transactions on Fundamentals of Electronics Communications and Computer Sciences, vol. E92A, no. 3, pp. 708–721, 2009.
[15] Virtanen, T., “Monaural sound source separation by nonnegative matrix factorization with temporal continuity and sparseness criteria,” IEEE Trans. On Audio, Speech, and Language Processing, vol. 15, no. 3, pp. 1066–1074, 2007.
[16] Zhang, S., Wang, W., Ford, J., and Makedon, F., “Learning from incomplete ratings using non-negative matrix factorization,” in SDM ’06: Proc. of SIAM Int. Conf. on Data Mining, pp. 548–552, 2006.
[17] D.D. Lee, H.S. Seung, “Algorithms for non-negative matrix factorization”, in: Advances in Neural Information Processing Systems, 2000.
[18] C. Lin. “Projected gradient methods for nonnegative matrix factorization”. Neural Computation, 19(10):2756–2779, 2007.
[19] V.P. Pauca, J. Piper, and R.J. Plemmons. “Nonnegative matrix factorization for spectral data analysis”. Linear Algebra and Its Applications, 416(1):29–47, 2006.
[20] W. S. Zheng, J. H. Lai, S. Liao, and R. He, “Extracting non-negative basis images using pixel dispersion penalty”, Pattern Recognition, vol. 45, no. 8, pp. 2912–2926, 2012.
[21] N. Pal and S. Pal, “A review on image segmentation techniques,” Pattern Recognition, vol. 26, no. 9, pp. 1277-1294, 1993.
[22] S. Angelina, L. P. Suresh, and S. H. Krishna Veni, “Image Segmentation Based On Genetic Algorithm for Region Growth and Region Merging,” Int’l Conf. Computing, Electronics and Electrical Technologies (ICCEET), 2012, pp. 970-974.
[23] S. Haifeng, “Clustering Color Image Segmentation Based on Maximum Entropy,” Proc. 2nd Int’l Conf. Computer Application and System Modeling (ICCASM), 2012, pp. 1466-1468.
[24] O. Veksler, Y. Boykov, and P. Mehrani, “Superpixels and supervoxels in an energy optimization framework,” Conf. European Computer Vision, 2010.
[25] A. Levinshtein, A. Stere, K. Kutulakos, D. Fleet, S. Dickinson, and K. Siddiqi, “Turbopixels: Fast superpixels using geometric flows,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 12, pp. 2290-2297, 2009.
[26] P. Felzenszwalb, and D. Huttenlocher, “Efficient graph-based image segmentation,” Int’l J. Computer Vision, vol. 59, no. 2, pp. 167–181, 2004.
[27] J. Shi, and J. Malik, “Normalized cuts and image segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 888–905, 2000.
[28] D. Comaniciu, and P. Meer, “Mean shift: a robust approach toward feature space analysis,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 5, pp. 603–619, 2002.
[29] A. Vedaldi, and S. Soatto, “Quick shift and kernel methods for mode seeking,” Proc. Conf. Computer Vision, 2008.
[30] L. Vincent, and P. Soille, “Watersheds in digital spaces: An efficient algorithm based on immersion simulations,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 6, pp. 583–598, 1991.
[31] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Susstrunk, “SLIC superpixels compared to state-of-the-art superpixel methods,” IEEE PAMI, pp. 2274–2282, 2012.
[32] S.P. Lloyd, “Least Squares Quantization in PCM,” IEEE Trans. Information Theory, vol. 28, no. 2, pp. 129-137, Mar. 1982.
[33] C. Elkan, “Using the Triangle Inequality to Accelerate K-Means,” Proc. Int’l Conf. Machine Learning, 2003.
[34] O. Verevka and J.W. Buchanan, “Local K-Means Algorithm for Color Image Quantization,” Proc. Graphics Interface, pp. 128-135, 1995.
[35] A. Kumar, Y. Sabharwal, and S. Sen, “A Simple Linear Time (1+e)-Approximation Algorithm for K-Means Clustering in Any Dimensions,” Proc. Ann. IEEE Symp. Foundations of Computer Science, pp. 454-462, 2004.
[36] T. Kanungo, D.M. Mount, N.S. Netanyahu, C.D. Piatko, R. Silverman, and A.Y. Wu, “A Local Search Approximation Algorithm for K-Means Clustering,” Proc. 18th Ann. Symp. Computational Geometry, pp. 10-18, 2002.
[37] C. Elkan, “Using the Triangle Inequality to Accelerate K-Means,” Proc. Int’l Conf. Machine Learning, 2003.
[38] J. Kim, T. Han, Y. W. Tai, and J. Kim, “Salient Region Detection via High-Dimensional Color Transform,” Conf. Computer Vision and Pattern Recognition.(CVPR), 2014, pp. 883-890
[39] N. Guan, D. Tao, Z. Luo, and J. Shawe-Taylor. (2012) “MahNMF: Manhattan non-negative matrix factorization.” [Online]. Available: http://arxiv.org/abs/1207.3438.
[40] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, “The PASCAL Visual Object Classes Challenge 2007,” http://www.pascalnetwork.org/challenges/VOC/voc2007.
[41] C. Févotte and J. Idier, “Algorithms for nonnegative matrix factorization with the beta-divergence,” Neural Computation, vol. 23, no. 9, pp. 2421–2456, 2011.
[42] P. O. Hoyer, “Non-negative sparse coding,” in Proc. IEEE Workshop Neural Netw. Signal Process., 2002, pp. 557–565.
[43] J. Bobin, J. L. Starck, J. Fadili, and Y. Moudden, “Sparsity and morphological diversity in blind source separation,” IEEE Trans. Image Process., vol. 16, no. 11, pp. 2662–2674, 2007.
[44] W. S. Zheng, J. H. Lai, S. Liao, and R. He, “Extracting non-negative basis images using pixel dispersion penalty,” Pattern Recognition, vol. 45, no. 8, pp. 2912–2926, 2012.
[45] J. Le Roux, F. Weninger, and J. R. Hershey, “Sparse NMF – half-baked or well done?” Mitsubishi Electric Research Laboratories Technical Report, 2015.
[46] S. Seneff, J. Glass, and V. Zue, “Speech database development at MIT: TIMIT and beyond,” Speech Communication, vol. 9, no. 4, pp. 351-356, 1990.
[47] E. Vincent, R. Gribonval, and C. Fevotte, “Performance measurement in blind audio source separation,” IEEE Trans. Audio, Speech, Lang. Process., 2006.
[48] C. L. Hsu and J. S. Jang, “On the improvement of singing voice separation for monaural recordings using the MIR-1K dataset,” IEEE Trans. Audio, Speech, Lang. Process., vol. 18, no. 2, pp. 310–319, 2010.
[49] D. Cai, X. He, X. Wu, and J. Han, “Non-negative matrix factorization on manifold,” in Proc. IEEE Int. Conf. Data Mining (ICDM ’08), December 2008, pp. 63– 72.
[50] Q. Gu, J. Zhou, “Neighborhood preserving non-negative matrix factorization,” in Proc British Machine Vision Conference, 2009.
[51] Q. Gu, J. Zhou, “Local learning regularized nonnegative matrix factorization,” in Proc Int. Joint Conferences on Artificial Intelligence, 2009, pp. 1046–1051.
[52] T. Pham, Y.-S. Lee, Y.-B. Lin, T.-C. Tai, and J.-C. Wang, “Single Channel Source Separation Using Sparse NMF and Graph Regularization,” in Proc. ASE BD&SI 2015, Kaohsiung, Taiwan, October 7-9, 2015.
[53] C. Boutsidis and E. Gallopoulos, “SVD-based initialization: A head start for nonnegative matrix factorization,” Pattern Recognition, vol. 41, no. 4, pp. 1350-1362, 2008.
[54] J. Kim and H. Park, “Sparse Nonnegative Matrix Factorization for Clustering,” Technical Report, College of Computing, Georgia Institute of Technology, Atlanta, GA, USA.
[55] D. Cai, X. He, J. Han, and T. S. Huang, “Graph regularized nonnegative matrix factorization for data representation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 8, pp. 1548–1560, 2011.
[56] J. Karvanen and A. Cichocki, “Measuring sparseness of noisy signals,” in Proc. International Symposium on Independent Component Analysis and Blind Signal Separation (ICA), 2003, pp. 125–130.
[57] Y. Li and A. Ngom, “The non-negative matrix factorization toolbox for biological data mining,” Source Code for Biology and Medicine, 2013.
[58] C. M. Bishop, Pattern Recognition and Machine Learning. Springer, 2006.
[59] The ORL Database of Faces. Website: http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html
[60] S. A. Nene, S. K. Nayar, and H. Murase, “Columbia Object Image Library (COIL-20),” Technical Report CUCS-005-96, February 1996.
[61] Face database by Georgia Institute of Technology. Website: http://www.anefian.com/research/facereco.html
[62] C.-J. Lin and J. J. Moré, “Newton’s method for large-scale bound constrained problems,” SIAM Journal on Optimization, vol. 9, pp. 1100–1127, 1999.
Advisor: Jia-Ching Wang (王家慶)    Date of Approval: 2016-08-26
