Thesis 110423029: Detailed Record




Author: Ming-Hui Wu (吳明慧)    Department: Information Management
Thesis Title: Combining Machine Learning and Deep Learning Based Dimensionality Reduction and Classification Techniques in High-Dimensional Structured Data
Related Theses
★ Building a sales forecasting model for commercial multifunction printers using data mining techniques
★ Applying data mining techniques to resource allocation forecasting: a case study of a computer OEM support unit
★ Flight delay analysis in the airline industry using data mining techniques: a case study of Company C
★ Safety control of new products in the global supply chain: a case study of Company C
★ Data mining in the semiconductor laser industry: a case study of Company A
★ Applying data mining techniques to predicting warehouse dwell time of air-export cargo: a case study of Company A
★ Optimizing YouBike rebalancing operations with data mining classification techniques
★ The impact of feature selection on different data types
★ Data mining for B2B corporate websites: a case study of Company T
★ Customer investment analysis and recommendations for financial derivatives: integrating clustering and association rule techniques
★ Building a computer-aided classification model for liver ultrasound images using convolutional neural networks
★ An identity recognition system based on convolutional neural networks
★ Comparative error-rate analysis of power-data imputation methods in energy management systems
★ Development of an employee sentiment analysis and management system
★ Data cleaning for class imbalance problems: a machine learning perspective
★ Applying data mining techniques to passenger self-check-in analysis: a case study of Airline C
Files: Full text available in the thesis system after 2028-07-01.
Abstract: In the real world, data often suffer from problems such as noise, irrelevant features, and excessive volume, so preprocessing is required before the data can be used. Dimensionality reduction is a common preprocessing method that retains the important features while reducing the number of dimensions. Ensemble dimensionality reduction applies several different dimensionality reduction algorithms and fuses their selected feature subsets in different ways, which can improve both the robustness and the classification accuracy of dimensionality reduction. Although deep learning techniques have received great attention in recent years, most related research targets unstructured data; few studies apply deep learning to high-dimensional structured data, and comprehensive comparisons of machine learning and deep learning techniques are rare. This study therefore investigates machine learning and deep learning based dimensionality reduction and classification techniques on high-dimensional structured datasets. It also examines whether deep learning can outperform traditional machine learning, and compares single with ensemble dimensionality reduction to identify the better-performing combinations.
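To illustrate the feature-fusion idea behind ensemble dimensionality reduction, here is a minimal sketch, assuming scikit-learn and a synthetic dataset, of fusing the feature subsets chosen by two selectors through intersection and union. The mutual-information filter (standing in for Information Gain), the decision tree (standing in for C4.5), and the subset size k=20 are illustrative choices, not the thesis's configuration.

```python
# Minimal sketch of ensemble feature selection: fuse the feature
# subsets chosen by two different selectors (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, n_features=100, random_state=0)

# Selector 1: an information-theoretic filter (stand-in for Information Gain).
ig = SelectKBest(mutual_info_classif, k=20).fit(X, y)
subset_ig = set(np.where(ig.get_support())[0])

# Selector 2: the features actually used by a decision tree (stand-in for C4.5).
tree = DecisionTreeClassifier(random_state=0).fit(X, y)
subset_dt = set(np.where(tree.feature_importances_ > 0)[0])

# Two simple fusion rules: intersection (conservative) and union (inclusive).
fused_intersection = sorted(subset_ig & subset_dt)
fused_union = sorted(subset_ig | subset_dt)
print(len(fused_intersection), "features by intersection,",
      len(fused_union), "features by union")
```

The intersection keeps only features that every selector agrees on, trading dimensionality for robustness, while the union keeps any feature at least one selector judged relevant.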
This study uses twenty high-dimensional structured datasets whose dimensionality ranges from 44 to 22,283. Machine learning and deep learning based dimensionality reduction and classification techniques are applied, and ensemble learning and feature fusion concepts are incorporated into the dimensionality reduction step. The experiments use five-fold cross-validation and record the average classification accuracy, the average area under the ROC curve (AUC), and the average CPU time. Finally, the results are analyzed to compare the strengths and weaknesses of the dimensionality reduction methods across different dimensionalities and to recommend how they should be used.
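A minimal sketch of this evaluation protocol, assuming scikit-learn, appears below; the SVM classifier, the synthetic data, and all parameter values are placeholders rather than the study's actual setup.

```python
# Minimal sketch of the evaluation protocol: five-fold cross-validation
# recording average accuracy, average AUC, and average CPU time.
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=500, random_state=0)

accs, aucs, cpu_times = [], [], []
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in cv.split(X, y):
    clf = SVC(probability=True, random_state=0)
    start = time.process_time()               # CPU time, not wall-clock
    clf.fit(X[train_idx], y[train_idx])
    preds = clf.predict(X[test_idx])
    proba = clf.predict_proba(X[test_idx])[:, 1]
    cpu_times.append(time.process_time() - start)
    accs.append(accuracy_score(y[test_idx], preds))
    aucs.append(roc_auc_score(y[test_idx], proba))

print(f"avg accuracy={np.mean(accs):.3f}  avg AUC={np.mean(aucs):.3f}  "
      f"avg CPU time={np.mean(cpu_times):.2f}s")
```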
According to the experimental results of this study, deep learning methods outperform machine learning methods for both dimensionality reduction and classification, and ensemble dimensionality reduction outperforms single dimensionality reduction, with the parallel approach performing best. The best single dimensionality reduction method is SAE+MLP, the best sequential ensemble method is IG+SAE+MLP, and the best parallel ensemble method is AE+SAE(SFC)+MLP.
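To make these names concrete, the following sketch shows the structure of a parallel combination in the spirit of AE+SAE(SFC)+MLP: a plain autoencoder and a sparse autoencoder are trained side by side, their learned codes are concatenated (serial feature combination, SFC), and an MLP classifies the fused representation. It assumes TensorFlow/Keras; the layer sizes, L1 sparsity penalty, epoch counts, and toy data are assumptions for illustration, not the thesis's tuned architecture.

```python
# Structural sketch of AE+SAE(SFC)+MLP: two autoencoders trained in
# parallel, codes concatenated (SFC), then an MLP classifier.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers, regularizers

rng = np.random.default_rng(0)
X = rng.random((300, 500)).astype("float32")    # toy high-dimensional data
y = rng.integers(0, 2, 300).astype("float32")

def autoencoder(input_dim, code_dim, sparse=False):
    """Plain AE, or a sparse AE with an L1 activity penalty on the code."""
    reg = regularizers.l1(1e-4) if sparse else None
    inp = keras.Input(shape=(input_dim,))
    code = layers.Dense(code_dim, activation="relu",
                        activity_regularizer=reg)(inp)
    out = layers.Dense(input_dim, activation="sigmoid")(code)
    ae = keras.Model(inp, out)
    ae.compile(optimizer="adam", loss="mse")
    return ae, keras.Model(inp, code)           # full AE + encoder only

ae, enc_ae = autoencoder(500, 64, sparse=False)
sae, enc_sae = autoencoder(500, 64, sparse=True)
ae.fit(X, X, epochs=5, batch_size=32, verbose=0)   # unsupervised training
sae.fit(X, X, epochs=5, batch_size=32, verbose=0)

# Serial feature combination (SFC): concatenate the two learned codes.
fused = np.concatenate([enc_ae.predict(X, verbose=0),
                        enc_sae.predict(X, verbose=0)], axis=1)

mlp = keras.Sequential([layers.Dense(64, activation="relu"),
                        layers.Dense(1, activation="sigmoid")])
mlp.compile(optimizer="adam", loss="binary_crossentropy",
            metrics=["accuracy"])
mlp.fit(fused, y, epochs=5, batch_size=32, verbose=0)
```

Concatenation keeps both codes intact and lets the downstream MLP weigh them, which is the usual rationale for serial feature combination over averaging or voting.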
Keywords ★ Data Mining
★ Machine Learning
★ Deep Learning
★ Dimensionality Reduction
★ Ensemble Learning
★ Autoencoder
Table of Contents
Abstract (Chinese) i
Abstract (English) ii
Contents iii
List of Figures vi
List of Tables viii
List of Appendices ix
Chapter 1 Introduction 1
1.1 Research Background 1
1.2 Research Motivation 2
1.3 Research Objectives 3
1.4 Thesis Organization 4
Chapter 2 Literature Review 5
2.1 Machine Learning Feature Selection Algorithms 5
2.1.1 Genetic Algorithm (GA) 7
2.1.2 Information Gain (IG) 8
2.1.3 Decision Tree C4.5 (DT) 8
2.2 Deep Learning Feature Extraction Algorithms 9
2.2.1 Autoencoder (AE) 10
2.2.2 Sparse Autoencoder (SAE) 11
2.2.3 Denoising Autoencoder (DAE) 12
2.2.4 Variational Autoencoder (VAE) 12
2.3 Machine Learning Classification Algorithms 13
2.3.1 Support Vector Machine (SVM) 13
2.3.2 K-Nearest Neighbor (KNN) 14
2.4 Deep Learning Classification Algorithms 15
2.4.1 Deep Multilayer Perceptron (MLP) 15
2.4.2 Deep Belief Network (DBN) 16
2.5 Ensemble Learning 16
2.5.1 Sequential Ensembles 17
2.5.2 Parallel Ensembles 17
2.5.3 Feature Fusion 18
2.5.3.1 Machine Learning Techniques 18
2.5.3.2 Deep Learning Techniques 19
Chapter 3 Research Methodology 21
3.1 Experimental Framework 21
3.2 Experimental Setup 22
3.2.1 Computing Environment 22
3.2.2 Datasets 22
3.2.3 Parameter Settings 24
3.2.3.1 Machine Learning Feature Selection Parameters 24
3.2.3.2 Machine Learning Classifier Parameters 25
3.2.3.3 Deep Learning Feature Extraction Parameters 25
3.2.3.4 Deep Learning Classifier Parameters 30
3.3 Experiment 1 31
3.3.1 Baseline 31
3.3.2 Single Dimensionality Reduction 31
3.4 Experiment 2 33
3.4.1 Sequential Ensemble Dimensionality Reduction 33
3.4.1.1 Homogeneous Ensembles 33
3.4.1.2 Heterogeneous Ensembles 34
3.4.2 Parallel Ensemble Dimensionality Reduction 35
3.5 Validation Criteria and Evaluation Metrics 37
3.5.1 Validation Criteria 37
3.5.2 Evaluation Metrics 38
Chapter 4 Experimental Results 40
4.1 Experiment 1 Results 40
4.1.1 Baseline and Single Dimensionality Reduction 40
4.1.1.1 Classification Accuracy 40
4.1.1.2 Area Under the ROC Curve (AUC) 44
4.1.1.3 CPU Time 47
4.1.1.4 Dimensionality Reduction Ratio 51
4.1.2 Experiment 1 Summary 51
4.2 Experiment 2 Results 53
4.2.1 Baseline and Ensemble Dimensionality Reduction 54
4.2.1.1 Classification Accuracy 54
4.2.1.2 Area Under the ROC Curve (AUC) 58
4.2.1.3 CPU Time 62
4.2.1.4 Dimensionality Reduction Ratio 65
4.2.2 Experiment 2 Summary 66
4.3 Analysis and Discussion 69
Chapter 5 Conclusion 71
5.1 Conclusions and Contributions 71
5.2 Future Research Directions and Suggestions 72
References 74
Appendix 80
Experiment 1 Results 80
Experiment 2 Results 93
Advisor: Chih-Fong Tsai (蔡志豐)    Approval Date: 2023-07-18