Master's/PhD Thesis 106453018 — Complete Metadata Record

DC field | Value | Language
dc.contributor | 資訊管理學系在職專班 | zh_TW
dc.creator | 蔡武霖 | zh_TW
dc.creator | WU-LIN TSAI | en_US
dc.date.accessioned | 2019-07-03T07:39:07Z |
dc.date.available | 2019-07-03T07:39:07Z |
dc.date.issued | 2019 |
dc.identifier.uri | http://ir.lib.ncu.edu.tw:88/thesis/view_etd.asp?URN=106453018 |
dc.contributor.department | 資訊管理學系在職專班 | zh_TW
dc.description | 國立中央大學 | zh_TW
dc.description | National Central University | en_US
dc.description.abstract | 機器學習在 Google AlphaGo 出現之後再次受到矚目,這也顯現出收集資料的重要性。但在現實生活中,資料收集時的困難與限制會造成收集到的資料類別不平均,容易使分類困難且不準確,因為特徵選取與不平衡處理(抽樣)會影響分類器在向量空間中的學習與分類效能。本研究使用知名公開網站的資料集,設計二種流程來探討類別不平衡問題,比較特徵選取與抽樣何者應置於前、何者應置於後。不平衡處理共使用五種抽樣模組(三種增加少數類別的過抽樣法、二種減少多數類別的欠抽樣法),特徵選取使用二種模組,並在這二種流程上分別加入有無正規化的比較。分類器則採用類別不平衡問題中最常使用的支持向量機(SVM)與決策樹(Decision Tree)二種。實驗結果顯示:類別不平衡資料應先執行特徵選取,再執行不平衡處理(抽樣);低資料量時以 SMOTE 過抽樣法較佳,高資料量時以隨機欠抽樣法(Random Under-Sampling)較佳;特徵維度小於 20 建議使用 PCA,20 維以上建議使用 GA;分類器以 SVM 為佳;至於資料是否需要正規化,決策樹不使用、支持向量機使用。 | zh_TW
dc.description.abstract | Machine learning has drawn renewed public attention since Google's AlphaGo, which also highlights the importance of data collection. In practice, however, the difficulties and constraints of data collection often produce class-imbalanced data sets, which make classification difficult and inaccurate, because feature selection and imbalance treatment (sampling) affect how a classifier learns and performs in the vector space. This study uses data sets from well-known public repositories and designs two workflows to investigate the class-imbalance problem, differing in whether feature selection or sampling is performed first. Five sampling modules are applied (three over-sampling methods that enlarge the minority class and two under-sampling methods that reduce the majority class), together with two feature-selection modules, and each workflow is run both with and without normalization. The two classifiers most commonly used on class-imbalanced data, the support vector machine (SVM) and the decision tree, perform the classification. The experiments show that imbalanced data should undergo feature selection first and sampling second: for small data sets, SMOTE over-sampling works best after feature selection, while for large data sets random under-sampling is preferable. PCA is recommended when fewer than 20 features are selected and a genetic algorithm (GA) at 20 dimensions or above, and SVM is the better classifier. As for normalization, the decision tree performs better without it and the SVM with it. | en_US
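The workflow the abstract recommends (normalize for SVM, run feature selection first, apply the imbalance treatment second, then classify) can be sketched as follows. This is a minimal illustration using scikit-learn on synthetic data; the data set, dimensionalities, and all parameter values are placeholders, not the thesis's actual settings, and random under-sampling is implemented by hand here rather than with a dedicated library.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic 90/10 imbalanced data stands in for the public data sets used in the study.
X, y = make_classification(n_samples=2000, n_features=10,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Step 1: normalize (the abstract recommends normalization when the classifier
# is an SVM) and run feature selection FIRST (< 20 dimensions, so PCA).
scaler = StandardScaler().fit(X_tr)
pca = PCA(n_components=5).fit(scaler.transform(X_tr))
Z_tr = pca.transform(scaler.transform(X_tr))

# Step 2: imbalance treatment AFTER feature selection. Here: random
# under-sampling of the majority class, the abstract's choice for large
# data sets (SMOTE over-sampling would play the same role for small ones).
rng = np.random.default_rng(0)
maj, mino = np.flatnonzero(y_tr == 0), np.flatnonzero(y_tr == 1)
keep = rng.choice(maj, size=len(mino), replace=False)
idx = np.concatenate([keep, mino])

# Step 3: classify the balanced, reduced-dimension data with an SVM.
clf = SVC().fit(Z_tr[idx], y_tr[idx])
acc = clf.score(pca.transform(scaler.transform(X_te)), y_te)
print(round(acc, 3))
```

Swapping steps 1 and 2 gives the competing workflow the thesis compares against; on the thesis's benchmarks, the order shown here (feature selection before sampling) performed better.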
dc.subject | 機器學習 | zh_TW
dc.subject | 資料探勘 | zh_TW
dc.subject | 類別不平衡 | zh_TW
dc.subject | 抽樣 | zh_TW
dc.subject | 特徵選取 | zh_TW
dc.subject | Machine Learning | en_US
dc.subject | Data Mining | en_US
dc.subject | Class Imbalanced Problem | en_US
dc.subject | Sampling | en_US
dc.subject | Feature Selection | en_US
dc.title | 資料淨化於類別不平衡問題: 機器學習觀點 | zh_TW
dc.language.iso | zh-TW | zh-TW
dc.type | 博碩士論文 | zh_TW
dc.type | thesis | en_US
dc.publisher | National Central University | en_US
