Master's/Doctoral Thesis 955201070: Detailed Record




Name: Yong-Di Huang (黃勇迪)   Department: Electrical Engineering
Thesis Title: Feature Extraction Using Principal Component Analysis and Support Vector Machines
Related Theses
★ A Study on Miniaturized GSM/GPRS Mobile Communication Modules
★ A Study on Speaker Identification
★ Robustness Analysis of Perturbed Singular Systems Using Projection Methods
★ Speaker Verification Using Support Vector Machine Models to Improve the Characteristic Function of the Alternative Hypothesis
★ Speaker Verification Combining Gaussian Mixture Supervectors and Derivative Kernel Functions
★ An Agile Particle Swarm Optimization Method
★ Lossless Predictive Image Coding Using an Improved Particle Swarm Method
★ Applying Particle Swarm Optimization to Speaker Model Training and Adaptation
★ A Speaker Verification System Based on Particle Swarm Optimization
★ A Study on Improved Mel-Frequency Cepstral Coefficients Combined with Multiple Speech Features
★ A Speaker Verification System Using Speaker-Specific Background Models
★ An Intelligent Remote Monitoring System
★ Stability Analysis and Controller Design for Positive Systems with Output Feedback
★ A Hybrid Interval-Search Particle Swarm Optimization Algorithm
★ Hand Gesture Recognition Based on Deep Neural Networks
★ A Posture-Correcting Necklace with Image-Based Automatic Calibration and Smartphone Warning Reception
Access
  1. This electronic thesis has been approved for immediate open access.
  2. The open-access full text is licensed to users solely for personal, non-commercial retrieval, reading, and printing for academic research purposes.
  3. Please comply with the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast this work without authorization.

Abstract (Chinese) Studies indicate that the characteristics of the data themselves directly affect classification performance. We therefore design a data-analysis method that makes the best use of the available features: necessary features are added to ambiguous data in pattern-recognition applications in order to guarantee class separability. This thesis combines principal component analysis (PCA) with feature extraction, and we propose two algorithms, LPCSVM and FCLSVM.
In the LPCSVM algorithm, the external class labels are treated as useful feature information and appended to the original data, forming an augmented data set. PCA then extracts features from this augmented data for support vector machine (SVM) classification. In the FCLSVM algorithm, we apply the same class-label idea but take the first principal component as a representative index of the augmented data set. These representative first principal components can be computed by an explicit mathematical expression, and the same transformation can be applied to any validation or test data before classification.
Experimental results show that using the representative-index data reduces the classification error. This confirms that the representative index provides additional, valuable information for feature extraction.
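The LPCSVM pipeline described above (append the class labels to the inputs, run PCA on the augmented set, then feed the extracted features to an SVM) can be sketched as follows. This is a minimal NumPy-only illustration: the toy data, the label weight, and the plain eigendecomposition PCA are assumptions for demonstration, not the thesis's exact implementation, and a real run would pass `features` on to an SVM classifier.

```python
import numpy as np

def augment_with_labels(X, y, weight=1.0):
    # LPCSVM step 1: treat the class label as an extra feature column,
    # forming the augmented data set described in the abstract.
    return np.hstack([X, weight * y.reshape(-1, 1)])

def pca_fit(X, n_components):
    # Plain PCA via eigendecomposition of the sample covariance matrix.
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = Xc.T @ Xc / (len(X) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]            # sort to descending variance
    return mean, eigvecs[:, order[:n_components]]

# Toy two-class data: two separated Gaussian blobs (illustrative only).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 4)), rng.normal(3.0, 1.0, (50, 4))])
y = np.array([-1] * 50 + [+1] * 50)

# LPCSVM step 2: extract PCA features from the augmented data; these
# features would then be used to train and run the SVM classifier.
X_aug = augment_with_labels(X, y)
mean, comps = pca_fit(X_aug, n_components=2)
features = (X_aug - mean) @ comps
print(features.shape)                            # (100, 2)
```

Note that because the labels enter the data before PCA, the leading components are pulled toward directions that separate the classes, which is the point of the augmentation.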
Abstract (English) Several studies have reported that the characteristics of a data set directly affect the capability of a classifier. We therefore study feature optimization: adding necessary features, based on vague and insufficient prior knowledge, to pattern-recognition problems in order to guarantee class separability. We show that class labels, an always-available resource, together with the feature-extraction concepts of principal component analysis (PCA), can be applied to this feature-optimization problem. We propose the LPCSVM and FCLSVM algorithms to supply a sufficient number of features that compensate for the lack of information.
In the LPCSVM algorithm, the output class labels are first regarded as useful feature information and incorporated into the original inputs to form an augmented data set. PCA is then applied to the augmented data to extract features for support vector machine (SVM) classification. In the FCLSVM algorithm, we develop the concept of an equivalent class label, in which the first principal component serves as a representative label of the augmented data set. The representative indices can then be expressed by an explicit mathematical function in first-principal-component form, so any validation or test set can be subjected to the same transformation before it is classified.
Experiments on several existing data sets show that the estimated classification errors are reduced when the augmented data are used. This implies that class labels can serve as extra, helpful information for feature extraction.
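The FCLSVM notion of a representative index (the first principal component of the label-augmented training data, reused as a fixed linear transformation for validation and test points) might be sketched as below. The abstract does not specify how the unknown label slot of a test point is filled before applying the transformation; padding it with 0, midway between the -1/+1 labels, is purely an illustrative assumption here, as are the toy data.

```python
import numpy as np

def first_pc(X):
    # The first principal component: leading eigenvector of the covariance.
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = Xc.T @ Xc / (len(X) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending order
    return mean, eigvecs[:, -1]              # last column = largest variance

# Illustrative training data: two separated blobs with -1/+1 labels.
rng = np.random.default_rng(1)
X_train = np.vstack([rng.normal(0.0, 1.0, (40, 3)),
                     rng.normal(4.0, 1.0, (40, 3))])
y_train = np.array([-1] * 40 + [+1] * 40)

# Augment the inputs with their class labels, then take the first principal
# component of the augmented set as the "representative index" direction.
X_aug = np.hstack([X_train, y_train.reshape(-1, 1)])
mean, w = first_pc(X_aug)

# A test point has no label, so its label slot is padded with 0 here
# (an illustrative placeholder) before applying the SAME linear
# transformation that was fitted on the training data.
x_test = np.array([4.1, 3.9, 4.2, 0.0])
index = float((x_test - mean) @ w)
print(np.isfinite(index))                    # True
```

Because the label column contributes variance aligned with the class split, the fitted direction `w` tends to project the two classes to well-separated index values, which is what makes the index "representative".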
Keywords (Chinese) ★ principal component analysis
★ support vector machines
Keywords (English) ★ support vector machines
★ principal component analysis
Table of Contents LIST OF FIGURES .......................................................................................................... III
LIST OF TABLES ............................................................................................................. IV
CHAPTER 1 INTRODUCTION .................................................................................. 1
1.1 Motivation and background ...................................................................................... 1
1.2 Organization and main tasks ..................................................................................... 2
CHAPTER 2 LITERATURE REVIEW OF SVM AND PCA .................................... 4
2.1 Support vector machines ........................................................................................... 4
2.1.1 Linear SVM classifier: separable case ............................................................... 4
2.1.2 Linear SVM classifier: non-separable case ........................................................ 7
2.1.3 Nonlinear SVM classifiers ................................................................................. 8
2.2 Multi-class support vector machines ......................................................................... 11
2.2.1 One-against-all method ...................................................................................... 12
2.2.2 One-against-one method .................................................................................... 13
2.3 Learning curves ......................................................................................................... 15
2.4 Principal component analysis .................................................................................... 17
CHAPTER 3 ALGORITHMS OF USING THE CLASS LABEL FOR CLASSIFICATION ...... 21
3.1 The method of LPCSVM algorithm .......................................................................... 21
3.2 The method of FCLSVM algorithm .......................................................................... 24
3.3 The framework of the proposed algorithms .............................................................. 26
CHAPTER 4 EXPERIMENTS AND DISCUSSION .................................................. 29
4.1 The data sets .............................................................................................................. 29
4.2 The results of using LPCSVM algorithm .................................................................. 30
4.2.1 Changes in training phase .................................................................................. 30
4.2.2 Generalization performance in testing phase ..................................................... 34
4.3 The results of using FCLSVM algorithm .................................................................. 38
4.3.1 The recognition results of inside tests ................................................................ 38
4.3.2 The results of a validation study ........................................................................ 46
CHAPTER 5 CONCLUSIONS AND FUTURE WORKS .......................................... 54
5.1 Conclusions ............................................................................................................... 54
5.2 Future works ............................................................................................................. 55
REFERENCES .................................................................................................................. 56
References [1] V. N. Vapnik, The nature of statistical learning theory, Springer-Verlag, New York, 1995.
[2] V. N. Vapnik, “An overview of statistical learning theory”, IEEE Transactions on Neural Networks, Vol. 10, pp 988-999, 1999.
[3] V. N. Vapnik, Statistical learning theory, Wiley, New York, 1998.
[4] R. Rifkin and A. Klautau, “In defense of one-vs-all classification”, Journal of Machine Learning Research, Vol. 5, pp 101-141, 2004.
[5] U. Kreßel, “Pairwise classification and support vector machines”, In Advances in Kernel Methods: Support Vector Learning, pp 255-268, MIT Press, Cambridge, MA, 1999.
[6] C. F. Lin and S. D. Wang, “Fuzzy support vector machines”, IEEE Transactions on Neural Networks, Vol. 13, pp 464-471, March 2002.
[7] C. F. Lin and S. D. Wang, “Training algorithms for fuzzy support vector machine with noisy data”, Pattern Recognition Letters Archive, Vol. 25, pp 1647-1656, 2004.
[8] D. M. J. Tax and R. P. W. Duin, “Characterizing one class datasets”, In Proceedings of the 16th Annual Symposium of the Pattern Recognition Association of South Africa, pp 21-26, 2005.
[9] R. C. Prati, G. E. A. P. A. Batista and M. C. Monard, “Class imbalances versus class overlapping: an analysis of a learning system behavior”, In MICAI, pp 312-321, 2004.
[10] B. Schölkopf, P. Simard, A. Smola and V. Vapnik, “Prior Knowledge in Support Vector Kernels,” Advances in Neural Information Processing System 10. MIT Press, pp 312-321, 1998.
[11] T. Inoue and S. Abe, “Fuzzy support vector machines for pattern classification”, In Proceedings of International Joint Conference on Neural Networks, Vol. 2, pp 1449-1454, July 2001.
[12] S. Abe and T. Inoue, “Fuzzy support vector machines for multiclass problems”, In European Symposium on Artificial Neural Networks Bruges, pp 113-118, Belgium, April 2002.
[13] H. Lei and V. Govindaraju, “Speeding up multi-class SVM evaluation by PCA and feature selection”, The 5th SIAM International Conference on Data Mining, Newport Beach, CA, April 25, 2005.
[14] L. J. Cao, K. S. Chua, W. K. Chong, H. P. Lee and Q. M. Gu, “A comparison of PCA, KPCA and ICA for dimensionality reduction in support vector machine”, Neurocomputing, Vol. 55, No. 1-2, pp 321-336, September 2003.
[15] I. T. Jolliffe, Principal component analysis, Springer-Verlag, New York, 1986.
[16] I. Guyon and A. Elisseeff, “An introduction to variable and feature selection”, Journal of Machine Learning Research, Vol. 3, pp 1157-1182, 2003.
[17] B. Schölkopf and A. Smola, Learning with kernels: support vector machines, regularization, optimization, and beyond, MIT Press, Cambridge, MA, 2001.
[18] R. O. Duda, P. E. Hart and D. G. Stork, Pattern classification, 2nd ed., pp 282-296, Wiley, New York, November 2000.
[19] T. Hastie, R. Tibshirani and J. Friedman, The elements of statistical learning: data mining, inference, and prediction, pp 191-217, Springer-Verlag, New York, 2001.
[20] W. J. Krzanowski, Principles of multivariate analysis: a user’s perspective, pp 33-83, Oxford University Press, USA, 1990.
[21] R. A. Johnson and D. W. Wichern, Applied multivariate statistical analysis, 4th ed., pp 458-512, Prentice-Hall, Englewood Cliffs, NJ, 1998.
[22] L. Breiman, “Bias, variance and arcing classifiers”, Technical Report 460, Statistics Department, University of California, Berkeley, CA, 1996.
[23] P. M. Murphy, UCI-Benchmark Repository of Artificial and Real Data Sets, http://www.ics.uci.edu/~mlearn, University of California Irvine, CA, 1995.
[24] P. Vlachos and M. Meyer, StatLib, http://lib.stat.cmu.edu/, Department of Statistics, Carnegie Mellon University, 1989.
Advisor: Yau-Tarng Juang (莊堯棠)   Date of Approval: 2008-6-18
