Master's/Doctoral Thesis 106423055 — Detailed Record

Name: Yi-Hsiu Ko (柯懿修)    Department: Information Management
Thesis Title: A Comparative Study of the Deep Learning Oriented Decision Tree for Incomplete Data Prediction
Related Theses
★ Building a Sales Forecasting Model for Commercial Multifunction Printers Using Data Mining Techniques
★ Applying Data Mining to Resource Allocation Prediction: A Case Study of a Computer OEM Support Unit
★ Applying Data Mining to Flight Delay Analysis in the Airline Industry: The Case of Company C
★ Security Control of New Products in the Global Supply Chain: The Case of Company C
★ Data Mining in the Semiconductor Laser Industry: The Case of Company A
★ Applying Data Mining to Predicting Warehouse Dwell Time of Air Export Cargo: The Case of Company A
★ Optimizing YouBike Rebalancing Operations with Data Mining Classification Techniques
★ The Effect of Feature Selection on Different Data Types
★ A Data Mining Study of B2B Corporate Websites: The Case of Company T
★ Customer Investment Analysis and Recommendation for Financial Derivatives: Integrating Clustering and Association Rule Techniques
★ Building a Computer-Aided Diagnosis Model for Liver Ultrasound Images with Convolutional Neural Networks
★ An Identity Recognition System Based on Convolutional Neural Networks
★ A Comparative Error-Rate Analysis of Power Imputation Methods in Energy Management Systems
★ Development of an Employee Sentiment Analysis and Management System
★ Data Cleaning for the Class Imbalance Problem: A Machine Learning Perspective
★ Applying Data Mining to Passenger Self-Service Check-In Analysis: The Case of Airline C
  1. This electronic thesis is approved for immediate open access.
  2. The open-access full text is licensed only for personal, non-profit searching, reading, and printing for academic research purposes.
  3. Please comply with the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast it without authorization.

Abstract (Chinese) With the advance of the Internet, ever more data are generated, and using these data effectively has become important, which has driven data mining techniques to mature. However, missing values are difficult to avoid in collected data. Scholars have therefore conducted many studies on imputing missing values with statistical and machine learning methods, hoping to reduce the impact of missing values on prediction, but research on direct-handling approaches remains scarce.
This thesis therefore proposes a direct-handling decision tree method based on the sliding-window and bounding-box concepts from deep learning: the Deep Learning Oriented Decision Tree. The dataset is partitioned according to different window sizes, multiple decision trees are built, and the final prediction is obtained by voting. Two experiments are conducted: Experiment 1 compares the Deep Learning Oriented Decision Tree with a single decision tree, and Experiment 2 compares it with other missing-value handling methods. Each experiment is further divided into sub-experiments (A) and (B), which examine whether the test set itself contains missing values. The results show that directly handling incomplete data with the Deep Learning Oriented Decision Tree yields the best classification accuracy on datasets with 19 or more dimensions. This contribution should help future researchers handle missing-value problems more appropriately and efficiently and produce better-performing prediction models.
Abstract (English) Advances in networking have made the amount of data produced grow rapidly, so using these data effectively has become very important, and data mining technology has matured accordingly. However, missing values are difficult to avoid in collected data. Scholars have therefore used statistical and machine learning methods in much research on imputing missing values, hoping to reduce the impact of missing values on predictions, but few studies focus on the alternative of directly handling datasets that contain missing values.
Therefore, this thesis proposes a novel approach based on the sliding-window and bounding-box concepts from deep learning, named the "Deep Learning Oriented Decision Tree". In this approach, the dataset is divided into several subsets according to different window sizes; each subset is used to build a decision tree, yielding an ensemble of decision trees whose final prediction is obtained by voting. Two experimental studies are conducted. Study 1 compares the Deep Learning Oriented Decision Tree with a single decision tree, and Study 2 compares it with other missing-value imputation methods; both studies also consider test data containing missing values. The results show that the proposed approach achieves the best classification accuracy on higher-dimensional datasets (19 or more features). Such a contribution can help future researchers deal with missing-value problems more appropriately and efficiently, producing better-performing prediction models.
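The ensemble scheme described in the abstract — slide a window over the features, build one model per window, combine by voting — can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the thesis's actual implementation: all function names are invented, and the per-window decision tree is reduced to a toy majority-class model so the sketch stays self-contained. The direct-handling idea appears in that each window's model simply skips training rows with missing values inside that window.

```python
from collections import Counter

def feature_windows(n_features, window_size, stride=1):
    """Slide a window of `window_size` feature indices across the feature space."""
    return [list(range(s, s + window_size))
            for s in range(0, n_features - window_size + 1, stride)]

def train_window_model(X, y, cols):
    """Toy per-window model: the majority class among training rows that are
    complete (no None) on this window's features. Rows missing a value inside
    the window are skipped rather than imputed -- the 'direct handling' idea."""
    labels = [yi for xi, yi in zip(X, y)
              if all(xi[c] is not None for c in cols)]
    return Counter(labels).most_common(1)[0][0] if labels else None

def predict(window_models):
    """Majority vote over the per-window predictions."""
    votes = [m for m in window_models if m is not None]
    return Counter(votes).most_common(1)[0][0]

# Tiny example: 3 rows, 4 features, None marks a missing value.
X = [[1, None, 3, 4], [2, 2, None, 4], [1, 2, 3, None]]
y = ["a", "b", "a"]
models = [train_window_model(X, y, w) for w in feature_windows(4, 2)]
pred = predict(models)  # majority vote over the three window models -> "a"
```

In the thesis's method each window would instead train a real decision tree on the window's feature subset; the point of the sketch is only the structure: overlapping windows, one model per window, and a vote at the end.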
Keywords (Chinese) ★ Data Mining
★ Data Pre-processing
★ Missing Values
★ Machine Learning
★ Deep Learning
Keywords (English) ★ Data Mining
★ Data Pre-processing
★ Missing Values
★ Machine Learning
★ Deep Learning
Table of Contents  Abstract (Chinese) i
Abstract (English) ii
Contents iii
List of Figures v
List of Tables viii
1. Introduction 1
1-1 Research Background 1
1-2 Research Motivation 2
1-3 Research Objectives 4
1-4 Thesis Organization 5
2. Literature Review 7
2-1 Introduction to Missing Values 7
2-1-1 Missing Completely at Random 7
2-1-2 Missing at Random 8
2-1-3 Missing Not at Random 9
2-2 Handling Missing Values 10
2-2-1 Case Deletion 10
2-2-2 Direct Handling 11
2-2-3 Imputation 12
2-3 Introduction to Deep Learning 15
3. Research Method 19
3-1 Experimental Design 19
3-1-1 Experimental Setup 19
3-2 Study 1 (A) 20
3-2-1 Deep Learning Oriented Decision Tree 21
3-2-2 Sliding Stride of the Deep Learning Oriented Decision Tree 24
3-3 Study 1 (B) 25
3-4 Study 2 (A) 26
3-5 Study 2 (B) 27
4. Experimental Results 28
4-1 Classification Accuracy 28
4-2 Results 28
4-2-1 Results of Study 1 (A) 28
4-2-2 Results of Study 1 (B) 41
4-2-3 Results of Study 2 (A) 53
4-2-4 Results of Study 2 (B) 65
4-3 Summary of Experiments 77
5. Conclusions and Future Work 83
5-1 Conclusions and Contributions 83
5-2 Future Work 83
References 85
Appendix 1 89
Advisor: Chih-Fong Tsai (蔡志豐)    Date of Approval: 2019-7-1

For questions about this thesis, please contact the Outreach Services Division, National Central University Library, TEL: (03)422-7151 ext. 57407, or by e-mail.