DC Field | Value | Language
dc.contributor | 電機工程學系 | zh_TW |
dc.creator | 何誌祥 | zh_TW |
dc.creator | Chih-Hsiang Ho | en_US |
dc.date.accessioned | 2009-06-27T07:39:07Z | |
dc.date.available | 2009-06-27T07:39:07Z | |
dc.date.issued | 2009 | |
dc.identifier.uri | http://ir.lib.ncu.edu.tw:444/thesis/view_etd.asp?URN=965201104 | |
dc.contributor.department | 電機工程學系 | zh_TW |
dc.description | 國立中央大學 | zh_TW |
dc.description | National Central University | en_US |
dc.description.abstract | In a typical classification problem, only vague and general information is usually available, together with a particular subset of the samples to be classified, so this limited information must be used to find an effective way of designing a correct classifier. Support Vector Machines (SVMs) show excellent classification performance in handling class overlap in the data, but information about class imbalance is not available to an SVM. Because Bayesian decision theory carries useful probabilistic and statistical information about the samples, this thesis combines Bayesian decision theory with support vector machines and, through nonlinear programming, derives Bayesian Support Vector Machines (BSVMs). In particular, the equation of the Bayesian decision surface provides a useful piece of information, the passing-through point x0, which clearly reflects the imbalance in the number of samples between classes. To make full use of the passing-through point x0 in the support vector machine, the form of the original SVM hyperplane equation is modified, which leads to the Bayesian support vector machine. The passing-through point x0 from Bayesian decision theory expresses the class-imbalance situation, while the nonlinear-programming formulation of the support vector machine handles the class-overlap problem, so the resulting classifier carries more complete information.
In the simulations, the passing-through point x0 of the Bayesian support vector machine is divided into three types, and each result is compared with the ordinary support vector machine. The results show that the BSVM can minimize the test error with a smaller penalty parameter C, that is, it yields a smoother hyperplane, which also helps avoid overfitting, so the Bayesian support vector machine has better classification ability.
| zh_TW |
dc.description.abstract | Classification approaches usually show poor generalization performance when an apparent class imbalance problem is present. Many studies reported in the literature show that the characteristics of the data sets strongly influence the performance of different classifiers. This motivates a closer study of the data, for which Bayesian methods are a natural choice: they readily capture valuable quantitative features of the data sets, which can then be used to refine the original classification problem so that good class separability is preserved. The purpose of this learning method is to exploit an attractive pragmatic feature of the Bayesian approach, namely its quantitative description of the class imbalance problem. A programming problem that mixes in probability information, Bayesian Support Vector Machines (BSVMs), is therefore discussed. Within this framework, some of the objectives and constraints of the original programming problem are changed, and the effects of these changes are studied. Experiments on several existing data sets show that, when prior distributions are assigned to the programming problem, the estimated classification errors are reduced.
| en_US |
dc.subject | 類不平衡 | zh_TW |
dc.subject | 分類 | zh_TW |
dc.subject | 支持向量機 | zh_TW |
dc.subject | 貝葉斯理論 | zh_TW |
dc.subject | Support Vector Machines | en_US |
dc.subject | Bayesian Decision Theory | en_US |
dc.subject | Classification | en_US |
dc.subject | Class Imbalance | en_US |
dc.title | 基於貝氏資訊之萃取應用於支持向量機之類不平衡分類問題 | zh_TW |
dc.language.iso | zh-TW | zh-TW |
dc.title | Design of Bayesian-based Knowledge Extraction for SVMs in Unbalanced Classifications | en_US |
dc.type | 博碩士論文 | zh_TW |
dc.type | thesis | en_US |
dc.publisher | National Central University | en_US |
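
The abstracts above describe the core idea only in prose: derive a passing-through point x0 from the Bayesian decision surface and use it to change the form of the SVM hyperplane on imbalanced data. The thesis's actual nonlinear-programming formulation of the BSVM is not reproduced in this record, so the following is only a minimal, hypothetical sketch in Python (assuming scikit-learn and NumPy, and a crude prior-weighted choice of x0 between the class means) that makes the SVM-versus-x0-shifted comparison concrete; it is not the author's BSVM algorithm.

# Minimal illustrative sketch only -- NOT the thesis's BSVM formulation.
# It mimics one idea from the abstract: pick a Bayesian-style "passing-through
# point" x0 (here, a prior-weighted point between the class means, an assumption
# made for illustration) and re-bias a linear SVM hyperplane so it passes
# through x0, then compare test error with the plain SVM on imbalanced data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Imbalanced two-class data (90% class 0, 10% class 1).
X, y = make_classification(n_samples=1000, n_features=2, n_redundant=0,
                           n_informative=2, n_clusters_per_class=1,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Plain linear SVM with a small penalty parameter C (smoother hyperplane).
svm = SVC(kernel="linear", C=0.1).fit(X_tr, y_tr)

# Crude passing-through point x0: prior-weighted point on the segment joining
# the class means, pulled toward the minority-class mean (mu1).
mu0 = X_tr[y_tr == 0].mean(axis=0)
mu1 = X_tr[y_tr == 1].mean(axis=0)
p1 = (y_tr == 1).mean()                 # empirical prior of the minority class
x0 = p1 * mu0 + (1.0 - p1) * mu1

# Keep the SVM weight vector w but recompute the bias so that the hyperplane
# w.x + b = 0 passes through x0 (the original intercept is discarded).
w = svm.coef_[0]
b_shifted = -np.dot(w, x0)

def predict_shifted(X):
    # decision value > 0 corresponds to class 1, matching SVC's convention
    return (X @ w + b_shifted > 0).astype(int)

print("plain SVM test error :", np.mean(svm.predict(X_te) != y_te))
print("x0-shifted test error:", np.mean(predict_shifted(X_te) != y_te))

The only point of the sketch is the mechanism: once x0 is chosen from prior (class-frequency) information, the bias of the separating hyperplane is recomputed so that the decision surface passes through x0, while the weight vector from a standard SVM trained with a small penalty parameter C is kept unchanged. The thesis instead folds this information into the nonlinear-programming problem itself.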