    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/77553


    Title: Sample Selection Using Unsupervised Learning under a Constraint on the Number of Selected Samples
    Authors: Li, Shih-Ping
    Contributors: Department of Information Management
    Keywords: document classification; outlier detection; K-Means; Balanced K-Means
    Date: 2018-07-02
    Issue Date: 2018-08-31 14:48:10 (UTC+8)
    Publisher: National Central University
    Abstract: In recent years, advances in computing power and data storage have drawn many researchers into data mining and big data, in the hope of uncovering hidden value in large datasets and enabling applications such as using classifiers to predict the category of a document. When building a classifier, training data that represent the whole dataset yield a better-trained model. Once selected, the training data must be labeled by experts, but expert labeling is expensive and the number of labels obtainable is limited. The training data therefore have to be sampled so that the most representative instances are drawn from the whole dataset, maximizing the utility of the labeling effort. In other words, the purpose of this study is to determine how to select the most suitable training data from unlabeled data for labeling, under a constraint on the number of samples that may be selected.
    This study focuses on selecting samples via unsupervised learning under a constraint on the number of selected samples. In the experiments, outliers are first removed from the raw data; K-Means is then applied with the natural number of clusters to find data satisfying "completeness". Next, Balanced K-Means re-clusters each K-Means cluster in proportion to that cluster's share of the original data, and the centroid of each resulting cluster is chosen as an unlabeled instance satisfying both "completeness" and "balance", to be labeled by experts. The labeled data are then used to build models with five different classification methods, which measure the classification performance obtained from the selected data. In other words, if the classifiers trained on the selected data achieve better classification results, the proposed method can select the most suitable training data under the sample-size constraint.
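    As a rough illustration, the selection pipeline described above (outlier removal, K-Means with a preset cluster count, size-proportional balanced sub-clustering, centroid picking) can be sketched with scikit-learn. Note that scikit-learn ships no Balanced K-Means, so proportional sub-clustering of each cluster stands in for it here; the 5% distance-based outlier cut, the cluster count, and the `select_samples` helper are all illustrative assumptions, not the thesis's exact procedure.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs
    from sklearn.metrics import pairwise_distances_argmin

    def select_samples(X, budget, n_clusters=3, random_state=0):
        """Pick `budget` representative unlabeled samples (sketch, not the thesis method)."""
        # (1) crude outlier removal: drop the 5% of points farthest from the global mean
        dist = np.linalg.norm(X - X.mean(axis=0), axis=1)
        X = X[dist <= np.quantile(dist, 0.95)]

        # (2) K-Means with an assumed "natural" cluster count
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=random_state).fit(X)

        # (3) allocate the labeling budget to clusters in proportion to their size
        sizes = np.bincount(km.labels_, minlength=n_clusters)
        alloc = np.maximum(1, np.round(budget * sizes / sizes.sum()).astype(int))

        # (4) sub-cluster each cluster (stand-in for Balanced K-Means) and keep
        #     the real sample nearest each sub-centroid
        selected = []
        for c in range(n_clusters):
            pts = X[km.labels_ == c]
            k = min(alloc[c], len(pts))
            sub = KMeans(n_clusters=k, n_init=10, random_state=random_state).fit(pts)
            idx = pairwise_distances_argmin(sub.cluster_centers_, pts)
            selected.append(pts[idx])
        return np.vstack(selected)

    # Synthetic stand-in for the unlabeled document features
    X, _ = make_blobs(n_samples=300, centers=3, random_state=0)
    picked = select_samples(X, budget=12)
    print(picked.shape)
    ```

    The rounding in step (3) means the returned count can deviate slightly from the budget; a faithful Balanced K-Means would instead enforce the per-cluster sizes during clustering itself.
    
    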
    Finally, the experimental results show that the proposed method performs well with KNN, Naïve Bayes, SVM, and MLP, but falls short of expectations with Random Forest. This suggests that a classification method not built on notions of space and distance cannot produce a highly accurate classifier here, because it does not match the design of the proposed method. Conversely, when the classification method covers the attributes sufficiently, the proposed method can find the most suitable training data under the constraint on the number of selected samples.
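    A minimal sketch of the evaluation step with the five classifier families named above, using scikit-learn defaults; the synthetic dataset, the 40-sample "labeled budget" split, and the hyperparameters are assumptions for illustration, not the thesis's document corpus or settings.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC

    # Toy labeled dataset standing in for the expert-labeled documents
    X, y = make_classification(n_samples=400, n_features=10, random_state=0)
    train, test = np.arange(40), np.arange(40, 400)  # small labeled budget

    # The five classifier families evaluated in the thesis
    classifiers = {
        "KNN": KNeighborsClassifier(),
        "Naive Bayes": GaussianNB(),
        "SVM": SVC(),
        "MLP": MLPClassifier(max_iter=2000, random_state=0),
        "Random Forest": RandomForestClassifier(random_state=0),
    }
    scores = {name: clf.fit(X[train], y[train]).score(X[test], y[test])
              for name, clf in classifiers.items()}
    for name, acc in scores.items():
        print(f"{name}: {acc:.3f}")
    ```

    In the thesis's setting the train indices would instead be the centroids chosen by the selection pipeline, so the accuracies measure how well that selection represents the whole dataset.
    
    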
    Appears in Collections: [Graduate Institute of Information Management] Theses and Dissertations

    Files in This Item:

    File        Description  Size  Format  Visits
    index.html               0Kb   HTML    97      View/Open


    All items in NCUIR are protected by copyright, with all rights reserved.
