Master's/Doctoral Thesis 955201083 — Detailed Record




Author: Ming-Feng Han (韓明峰)    Department: Department of Electrical Engineering
Thesis Title: Fuzzy Support Vector Machines with the Uncertainty of Parameter C
Related Theses:
★ A Study of Image Processing for Home Burglar Security
★ Design and Implementation of a Fingerprint Recognition System for Regional Areas
★ Head-Pose Recognition for Cursor and Robot Control
★ Path Planning Using Rapidly-Exploring Random Trees, the Artificial Fish Swarm Algorithm, and Hazard Degree
★ A Study on Localization and Control of Intelligent Robots
★ Object Tracking Based on the Artificial Bee Colony Algorithm
★ Real-Time Face Detection, Pose Recognition, and Tracking in Complex Environments
★ A Face Recognition System Based on Circularly Symmetric Gabor Filters and SVM
★ Path Tracking for a Wirelessly Monitored Autonomous Vehicle Using an Improved Agglomerative Hierarchical Algorithm and Improved Color-Space Image Techniques
★ Fuzzy Neural Networks for Wall-Following Control, Gait Motion, and Posture Balance of a Hexapod Robot
★ Detection Applications of a Quadrotor and a Study of Its Wireless Charging System
★ Single-Image Dehazing Combining White-Patch Retinex Theory and an Improved Dark Channel Prior
★ A Study of Hand-Gesture Recognition Based on Deep Neural Networks
★ A Posture-Correcting Necklace with Image-Recognition Auto-Calibration and a Smartphone Alert System
★ Analysis of Fuzzy Control and Grey Prediction for a Tunnel-Type Robotic Arm
★ Design of Fuzzy Sliding-Mode Controllers and Application to Nonlinear Systems
Access Rights:
  1. The author has agreed to make the electronic full text of this thesis openly accessible immediately.
  2. Once open access takes effect, the electronic full text is licensed to users only for individual, non-profit retrieval, reading, and printing for academic research purposes.
  3. Please observe the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast the work without authorization.

Abstract (Chinese): In pattern recognition, one often hopes to uncover the regularities and features hidden in disordered raw data so as to support a classifier's decision making. The aim of this thesis is therefore to mine useful features from the fuzziness among training samples in order to improve the performance of Support Vector Machines (SVMs). Theoretically, the thesis assumes that the training samples are distributed as two Gaussian density functions exhibiting class overlap; a fuzzy class-overlap region can be identified through the intersection of the two probability density functions. The fuzzy membership function of the training samples is then constructed from the properties of the support vectors. For example, training samples that fall inside the margin are support vectors that typically lie near the center of the class-overlap region; because they contribute more to establishing the decision boundary, they receive larger weights when the membership function is determined. Conversely, support vectors that fall outside the margin generally lie farther from the overlap intersection; since they constitute training errors and contribute less to the decision boundary, they receive smaller membership weights.
In practice, in the design of the SVM objective function, this thesis combines the membership function with the parameter C to define a fuzzy-penalizing parameter, using each training sample's individual contribution to balance the margin width against the training error, yielding a new and efficient Fuzzy Support Vector Machine (FSVM). To validate this classifier, four real-world classification problems from the UCI repository are solved. Experiment 1 compares it with the traditional SVM; the results show that the FSVM performs better and is a valuable classifier. Experiment 2 compares different membership functions; the results confirm that the membership-construction method presented in this thesis is more feasible and more objective.
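The overlap-based membership construction described above can be sketched as follows. This is a minimal one-dimensional illustration, not the thesis's exact formulation: the function names and the min/max density ratio used as the overlap degree are assumptions. The ratio is close to 1 at the intersection of the two class densities (the overlap center, where support vectors inside the margin tend to lie) and decays toward 0 far from the overlap.

```python
import numpy as np

def gauss_pdf(x, mu, sigma):
    """Gaussian probability density, evaluated element-wise."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def overlap_membership(x, mu_pos, sigma_pos, mu_neg, sigma_neg):
    """Fuzzy membership from the class-overlap of two Gaussian class
    densities: near 1 where the densities are comparable (the overlap
    center), near 0 where one class clearly dominates."""
    p_pos = gauss_pdf(x, mu_pos, sigma_pos)
    p_neg = gauss_pdf(x, mu_neg, sigma_neg)
    return np.minimum(p_pos, p_neg) / np.maximum(p_pos, p_neg)

# Two 1-D classes centered at 0 and 2: the densities intersect at x = 1,
# so a sample there gets full membership; a sample far away gets little.
m = overlap_membership(np.array([1.0, -3.0]), 0.0, 1.0, 2.0, 1.0)
```

In the thesis's scheme such memberships would then reweight each sample's penalty, so that samples near the overlap center influence the decision boundary more than distant outliers.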
Abstract (English): In typical pattern recognition applications, only vague and general knowledge about the situation is usually available, and an optimal classifier is hard to develop when the decision function lacks sufficient knowledge. The aim of our experiments is to extract features through an appropriate transformation of the training data set. In this thesis, we assume that the training samples are drawn from Gaussian distributions and that the data sets are imprecise, exhibiting class overlap. The overlap can be represented by fuzzy sets, so a fuzzy membership can be created according to the properties of the class overlap. For example, training data close to the decision boundary can be treated as Support Vectors (SVs) in the center of the class overlap and assigned a higher degree of fuzzy membership, because these points contribute more to the decision boundary. Conversely, training data farther from the decision boundary can be treated as SVs outside the margin and assigned a lower degree of fuzzy membership. Within Support Vector Machines (SVMs), we define a fuzzy-penalizing parameter to balance margin width against model complexity.
Finally, a powerful learning classifier is presented: the Fuzzy Support Vector Machine with the Uncertainty of Parameter C rule (FSVMs-UPC). To verify this classifier, the proposed method is compared with the traditional SVM in Experiment 1; the results show that FSVMs-UPC is superior to the traditional SVM in both testing accuracy and stability. Experiment 2 shows that our overlap-focused membership generation method produces a more feasible and better membership.
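The fuzzy-penalizing parameter amounts to a per-sample scaling s_i·C of the hinge-loss penalty in the SVM primal. The following is a minimal linear sketch via subgradient descent on the primal 0.5·||w||² + C·Σᵢ sᵢ·max(0, 1 − yᵢ(w·xᵢ + b)); the function name, optimizer settings, and synthetic data are illustrative assumptions, not the thesis's actual solver or data sets.

```python
import numpy as np

def train_fsvm(X, y, s, C=1.0, lr=0.01, epochs=200):
    """Subgradient descent on the fuzzy SVM primal
       0.5*||w||^2 + C * sum_i s_i * max(0, 1 - y_i*(w.x_i + b)),
    where s[i] in (0, 1] is the fuzzy membership of sample i, so each
    sample's effective penalty is the fuzzy-penalizing parameter s_i*C."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1                    # samples violating the margin
        coef = s[viol] * y[viol]              # membership-weighted labels
        w -= lr * (w - C * coef @ X[viol])    # subgradient step in w
        b -= lr * (-C * np.sum(coef))         # subgradient step in b
    return w, b

# Two overlapping Gaussian classes; uniform memberships reduce to a plain SVM.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(2.0, 1.0, (50, 2)), rng.normal(-2.0, 1.0, (50, 2))])
y = np.array([1] * 50 + [-1] * 50)
w, b = train_fsvm(X, y, s=np.ones(100))
accuracy = np.mean(np.sign(X @ w + b) == y)
```

With scikit-learn's SVC, a comparable effect can be obtained by passing the memberships as `sample_weight` to `fit`, which rescales C per sample.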
Keywords (Chinese)
★ Fuzzy theory (模糊理論)
★ Support vector machines (支持向量機)
★ Uncertainty (不確定性)
Keywords (English)
★ Fuzzy set
★ Support Vector Machines
★ Uncertainty
Table of Contents
Abstract ……………….…………………………………… I
Contents ……………………………………………………. III
List of Figures ……………………………………………. VI
List of Tables ……………………………………………. IX
Chapter 1 Introduction ………………………………………1
1.1 Background ………………………………………1
1.2 Purpose and Motivation …………………3
1.3 Contribution …………………………………4
1.4 Organization ……………………………4
Chapter 2 Support Vector Machines ………………………5
2.1 Linear Support Vector Machines ………………………5
2.1.1 A Separable Case ……………………………….5
2.1.2 A Non-Separable Case……………………………8
2.2 Nonlinear Support Vector Machines …………………10
2.2.1 Kernels ………………………………………….10
2.2.2 Nonlinear Model……………………………………11
2.3 Learning Curves …………………………………13
Chapter 3 Fuzzy Support Vector Machines ……….………17
3.1 Fuzzy theory in training data………………….………17
3.2 Formulation of FSVMs ……………………………………18
3.2.1 FSVM Framework…………………………………………20
3.3 Creation of a Fuzzy Membership …………………………22
3.3.1 Fuzzy Membership Focus On One Class ……………22
3.3.2 Fuzzy Membership Focus On Overlapping …23
Chapter 4 Experimental Results and Discussion………27
4.1 Data Sets………………………………………27
4.2 Experiment 1 ……………………………………………28
4.2.1 Training Phase ………………………………………28
4.2.2 Testing Phase …………………………………………37
4.3 Experiment 2 ……………………………………………40
4.3.1 Training Phase ……………………40
4.3.2 Testing Phase …………………47
Chapter 5 Conclusions and Recommendations…50
References …………………………52
Appendix I Data Set……………………………..……………54
List of Publications …………………………..……………60
References
[1] V. N. Vapnik, The Nature of Statistical Learning Theory, Springer-Verlag, Berlin/Heidelberg/New York, 1995.
[2] V. N. Vapnik, “An Overview of Statistical Learning Theory,” IEEE Transactions on Neural Networks, Vol. 10, pp. 988–999, 1999.
[3] V. N. Vapnik, Statistical Learning Theory, Wiley, New York, 1998.
[4] C. F. Lin and S. D. Wang, “Fuzzy Support Vector Machines,” IEEE Transactions on Neural Networks, Vol. 13, No. 2, 2002.
[5] D. M. J. Tax and R. P. W. Duin, “Characterizing One-Class Datasets,” in Proceedings of the 16th Annual Symposium of the Pattern Recognition Association of South Africa, pp. 21–26, 2005.
[6] R. C. Prati, G. E. A. P. A. Batista, and M. C. Monard, “Class Imbalances versus Class Overlapping: An Analysis of a Learning System Behavior,” in Mexican International Conference on Artificial Intelligence, pp. 312–321, 2004.
[7] B. Schölkopf, P. Simard, A. Smola, and V. Vapnik, “Prior Knowledge in Support Vector Kernels,” in M. Jordan, M. Kearns, and S. Solla, Eds., Advances in Neural Information Processing Systems 10, MIT Press, pp. 312–321, 1998.
[8] L. A. Zadeh, “Fuzzy Sets,” Information and Control, Vol. 8, pp. 338–353, 1965.
[9] Y. Wang, S. Wang, and K. K. Lai, “A New Fuzzy Support Vector Machine to Evaluate Credit Risk,” IEEE Transactions on Fuzzy Systems, Vol. 13, No. 6, 2005.
[10] I. Guyon, N. Matic, and V. N. Vapnik, Discovering Informative Patterns and Data Cleaning, MIT Press, Cambridge, MA, 1996.
[11] X. Zhang, “Using Class-Center Vectors to Build Support Vector Machines,” in International Workshop on Neural Networks for Signal Processing, pp. 3–11, 1999.
[12] T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Springer-Verlag, New York, 2001.
[13] K. K. Lee, S. R. Gunn, C. J. Harris, and P. A. S. Reed, “Classification of Unbalanced Data with Transparent Kernels,” in Conference on Neural Networks, Vol. 4, p. 2445, 2001.
[14] A. T. Quang, Q.-L. Zhang, and X. Li, “Evolving Support Vector Machine Parameters,” in International Conference on Machine Learning and Cybernetics, Vol. 1, p. 548, 2002.
[15] L. Breiman, Bias, Variance and Arcing Classifiers, Technical Report 460, Statistics Department, University of California, CA, 1996.
[16] P. M. Murphy, UCI Benchmark Repository of Artificial and Real Data Sets, http://www.ics.uci.edu/~mlearn, University of California, Irvine, CA, 1995.
[17] P. Vlachos and M. Meyer, StatLib Biomed Data, http://lib.stat.cmu.edu/, Department of Statistics, Carnegie Mellon University, 1989.
Advisor: Hung-Yuan Chung (鍾鴻源)    Approval Date: 2008-07-09
