Master's/Doctoral Thesis 965201104: Full Metadata Record

DC Field | Value | Language
dc.contributor | Department of Electrical Engineering | zh_TW
dc.creator | 何誌祥 | zh_TW
dc.creator | Chih-Hsiang Ho | en_US
dc.date.accessioned | 2009-06-27T07:39:07Z
dc.date.available | 2009-06-27T07:39:07Z
dc.date.issued | 2009
dc.identifier.uri | http://ir.lib.ncu.edu.tw:444/thesis/view_etd.asp?URN=965201104
dc.contributor.department | Department of Electrical Engineering | zh_TW
dc.description | National Central University | zh_TW
dc.description | National Central University | en_US
dc.description.abstract | In a typical classification problem, only some vague, general information is available, together with a specific subset of the samples to be classified, so this information must be used to find an effective way to design a correct classifier. Support Vector Machines (SVMs) show excellent classification performance in handling class overlap, but information about class data imbalance is not visible to an SVM. Since Bayesian decision theory carries useful probabilistic and statistical information about the samples, this thesis combines Bayesian decision theory with support vector machines and, via nonlinear programming, derives Bayesian Support Vector Machines (BSVMs). In addition, the Bayesian decision surface equation provides the useful information of a passing-through point x0, which clearly reflects the imbalance in the numbers of samples between classes. To make full use of the information in x0, the form of the original SVM hyperplane equation is changed, which leads to the BSVM. The passing point x0 from Bayesian decision theory expresses the class-imbalance situation, while the nonlinear programming of the SVM handles the class-overlap problem, so the resulting classifier carries more complete information. In the simulations, the passing point x0 of the BSVM is divided into three types, and every experimental result is compared with the standard SVM. The results show that the BSVM can minimize the test error with a smaller penalty parameter C, that is, it produces a smoother hyperplane, which also avoids overfitting; the BSVM therefore has better classification ability. | zh_TW
dc.description.abstract | Classification approaches usually show poor generalization performance when there is an apparent class-imbalance problem. Many studies in the literature report that the characteristics of the data sets strongly influence the performance of different classifiers. A study of this situation is therefore conceived, and Bayesian methods naturally suggest themselves: they readily capture valuable quantitative features of the data sets, which can then be used to revise the original classification problems so that good class separability is guaranteed. The purpose of this learning method is to exploit an attractive pragmatic feature of the Bayesian approach, namely a quantitative description of the class-imbalance problem. Thus a programming problem that mixes in probability information, the Bayesian Support Vector Machine (BSVM), is discussed. In this framework, some of the objectives and constraints of the original programming problem must be changed, and the consequences of those changes are studied. Experiments on several existing data sets show that, when prior distributions are assigned to the programming problem, the estimated classification errors are reduced. | en_US
dc.subject | Class imbalance | zh_TW
dc.subject | Classification | zh_TW
dc.subject | Support Vector Machines | zh_TW
dc.subject | Bayesian theory | zh_TW
dc.subject | Support Vector Machines | en_US
dc.subject | Bayesian Decision Theory | en_US
dc.subject | Classification | en_US
dc.subject | Unbalance | en_US
dc.title | Bayesian Information Extraction Applied to Class-Imbalance Classification Problems in Support Vector Machines | zh_TW
dc.language.iso | zh-TW | zh-TW
dc.title | Design of Bayesian-based Knowledge Extraction for SVMs in Unbalanced Classifications | en_US
dc.type | Master's/doctoral thesis | zh_TW
dc.type | thesis | en_US
dc.publisher | National Central University | en_US
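
The abstracts above describe combining the passing point x0 of the Bayes decision surface, which reflects the class priors and hence the class imbalance, with the SVM hyperplane. The following is only a minimal illustrative sketch of that general idea, not the thesis's actual BSVM nonlinear program: it assumes Gaussian class-conditionals with a shared variance, synthetic imbalanced data, and scikit-learn's SVC, and every name and number in it is an assumption made for demonstration.

```python
# Illustrative sketch only: estimate a Bayes "passing point" x0 from
# class-conditional Gaussians (priors carry the imbalance information),
# then re-bias a linear SVM so its hyperplane passes through x0.
# This is NOT the thesis's BSVM formulation; data and constants are assumed.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Imbalanced two-class data: 500 majority vs 50 minority samples (assumed).
X_neg = rng.normal(loc=-1.0, scale=1.0, size=(500, 2))
X_pos = rng.normal(loc=+1.5, scale=1.0, size=(50, 2))
X = np.vstack([X_neg, X_pos])
y = np.hstack([np.zeros(500), np.ones(50)])

# Standard linear SVM with a small penalty parameter C (smoother hyperplane).
svm = SVC(kernel="linear", C=0.1).fit(X, y)
w, b = svm.coef_[0], svm.intercept_[0]

# Project the data onto w and estimate the Bayes decision point x0 for
# equal-variance Gaussian class-conditionals:
#   x0 = (m1 + m2)/2 + var/(m1 - m2) * ln(P2/P1)
# The prior ratio P2/P1 is how class imbalance enters the boundary.
proj = X @ w
m_neg, m_pos = proj[y == 0].mean(), proj[y == 1].mean()
var = np.average([proj[y == 0].var(), proj[y == 1].var()],
                 weights=[(y == 0).sum(), (y == 1).sum()])
prior_neg, prior_pos = (y == 0).mean(), (y == 1).mean()
x0 = 0.5 * (m_neg + m_pos) + var / (m_neg - m_pos) * np.log(prior_pos / prior_neg)

# Re-bias the hyperplane so that w.x + b_bayes = 0 exactly at the point x0.
b_bayes = -x0
print("SVM bias:", b, " Bayes-informed bias:", b_bayes)
```

Under these assumptions, x0 shifts the threshold away from the plain midpoint of the class means by an amount proportional to the log of the prior ratio, which is the kind of imbalance information the abstract notes a standard SVM does not see.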
