Thesis Record 110423014 — Detailed Information




Author: Chen-An Hsieh (謝鎮安)    Department: Information Management
Thesis Title: Homogeneous Ensemble and Two-Phase Feature Selection Based on Relevance and Autoencoders (基於相關性與自動編碼器的同質集成與二階段特徵選擇)
Full text: available to browse in the system after 2028-6-30.
Abstract (Chinese) This study applies autoencoder feature selection to supervised tasks, examines how it compares with relevance feature selection in prediction performance and stability, and further analyzes how a homogeneous ensemble architecture and the proposed two-phase combination architecture affect feature selection effectiveness, in order to establish a better feature selection method.
We constructed an autoencoder feature selection method based on the Gedeon method and compared it with four relevance feature selection methods: Impurity, ANOVA, ReliefF, and Mutual Information. Experimental results show that without architectural improvements, autoencoder feature selection performs poorly.
In the homogeneous ensemble experiments, relevance feature selection traded a small amount of prediction performance for better stability, improving its overall evaluation; autoencoder feature selection gained both stability and prediction performance from the homogeneous ensemble architecture, surpassing relevance feature selection in prediction performance. In the two-phase experiments, using autoencoder feature selection as the first phase was the best combination order. By combining two feature selection methods with different evaluation criteria, this order outperformed all non-ensemble and homogeneous-ensemble feature selection methods in prediction performance.
Based on the experimental results, this study suggests choosing between the homogeneous ensemble and the two-phase combination architecture according to the application scenario to improve overall feature selection effectiveness. The homogeneous ensemble focuses on improving stability, while the two-phase combination effectively improves prediction performance and maintains good stability by applying homogeneous ensembles to the feature selection in both phases.
Abstract (English) This study aims to apply autoencoder feature selection to supervised tasks, investigate its prediction performance and stability compared to relevance feature selection, and further analyze the impact of a homogeneous ensemble and the proposed two-phase combination on feature selection effectiveness, in order to establish a better feature selection method.
We constructed an autoencoder feature selection method based on the Gedeon method and compared it with four relevance feature selection methods: Impurity, ANOVA, ReliefF, and Mutual Information. The experimental results showed that autoencoder feature selection performed poorly without architectural improvements.
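The Gedeon-style scoring that this kind of autoencoder feature selection relies on can be sketched as follows. This is only a minimal illustration of ranking input features by the magnitude of a trained network's weights; the thesis's actual architecture, training procedure, and any design details beyond the Gedeon weight-share idea are not reproduced here, and the weight matrices below are random stand-ins for trained encoder/decoder weights:

```python
import numpy as np

def gedeon_importance(w_in, w_out):
    """Gedeon (1997) style input importance for a one-hidden-layer network.

    w_in : (n_features, n_hidden) input-to-hidden weight matrix
    w_out: (n_hidden, n_outputs) hidden-to-output weight matrix
    Returns one importance score per input feature.
    """
    # Share of each input in every hidden unit (column-normalised |weights|)
    p = np.abs(w_in) / np.abs(w_in).sum(axis=0, keepdims=True)
    # Share of each hidden unit in every output
    q = np.abs(w_out) / np.abs(w_out).sum(axis=0, keepdims=True)
    # Propagate the shares through the network and average over outputs
    return (p @ q).mean(axis=1)

# Random stand-ins for an autoencoder's trained encoder/decoder weights
rng = np.random.default_rng(0)
w_in = rng.normal(size=(10, 4))
w_out = rng.normal(size=(4, 10))
scores = gedeon_importance(w_in, w_out)
top5 = np.argsort(scores)[::-1][:5]  # indices of the 5 highest-scoring features
```

Because every column of the propagated share matrix sums to one, the scores form a distribution over input features, which makes thresholding by cumulative share straightforward.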
In the homogeneous ensemble experiment, relevance feature selection achieved a better overall evaluation by sacrificing a small amount of prediction performance in exchange for improved stability. Autoencoder feature selection gained both stability and prediction performance from the homogeneous ensemble, outperforming relevance feature selection in prediction performance. In the two-phase combination, using autoencoder feature selection as the first phase is the optimal combination order. Combining two feature selection methods with different evaluation criteria in this order outperforms all non-ensemble and homogeneous ensemble feature selection methods in prediction performance.
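A homogeneous ensemble of this kind can be sketched in a few lines: the same filter (here an ANOVA-style F-score written out in NumPy) is applied to repeated random subsamples of the training data, and the per-round rankings are aggregated by mean rank. The sampling ratio, round count, and synthetic data are illustrative assumptions, not the thesis's actual settings:

```python
import numpy as np

def anova_f(X, y):
    """One-way ANOVA F-statistic per feature (a simple relevance filter)."""
    classes = np.unique(y)
    overall = X.mean(axis=0)
    ssb = sum((y == c).sum() * (X[y == c].mean(axis=0) - overall) ** 2 for c in classes)
    ssw = sum(((X[y == c] - X[y == c].mean(axis=0)) ** 2).sum(axis=0) for c in classes)
    dfb, dfw = len(classes) - 1, len(y) - len(classes)
    return (ssb / dfb) / (ssw / dfw + 1e-12)

def homogeneous_ensemble_rank(X, y, scorer, n_rounds=20, sample_ratio=0.8, seed=0):
    """Score features on random subsamples and aggregate by mean rank."""
    rng = np.random.default_rng(seed)
    n = len(y)
    ranks = []
    for _ in range(n_rounds):
        idx = rng.choice(n, size=int(sample_ratio * n), replace=False)
        s = scorer(X[idx], y[idx])
        ranks.append(np.argsort(np.argsort(-s)))  # rank 0 = most relevant
    return np.asarray(ranks).mean(axis=0)         # lower = more consistently selected

# Synthetic data: only feature 0 carries class signal
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))
y = rng.integers(0, 2, size=200)
X[:, 0] += 3 * y
mean_rank = homogeneous_ensemble_rank(X, y, anova_f)
```

Mean-rank aggregation is one of several aggregators discussed in the ensemble feature selection literature (e.g. refs [88]-[89]); resampling smooths out the selection noise of any single run, which is the stability gain the abstract describes.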
Based on the experimental results, this study suggests choosing between a homogeneous ensemble and the two-phase combination according to the application scenario, to enhance the effectiveness of feature selection. The homogeneous ensemble focuses on improving stability. In contrast, the two-phase combination effectively improves prediction performance and maintains good stability by applying a homogeneous ensemble to the feature selection in both phases.
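The recommended order of the two-phase combination can be sketched as a serial pipeline: a first scorer (standing in for the autoencoder-based method) pre-filters the features down to k1 candidates, then a second, relevance-style scorer picks the final k2 from that subset. The scorers, subset sizes, and synthetic data below are illustrative assumptions only:

```python
import numpy as np

def abs_corr(X, y):
    """Absolute Pearson correlation of each feature with the label."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    cov = Xc.T @ yc / len(y)
    return np.abs(cov / (X.std(axis=0) * y.std() + 1e-12))

def two_phase_select(X, y, phase1_scores, phase2_scorer, k1, k2):
    """Serial two-phase selection: phase 1 pre-filters, phase 2 refines."""
    keep1 = np.argsort(-phase1_scores)[:k1]   # phase 1: e.g. autoencoder scores
    s2 = phase2_scorer(X[:, keep1], y)        # phase 2: e.g. a relevance filter
    return np.sort(keep1[np.argsort(-s2)[:k2]])

# Synthetic data: features 2 and 7 carry signal, but the hypothetical
# phase-1 scores only favour features 0-4, so phase 2 can only recover 2
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 10))
y = rng.integers(0, 2, size=300).astype(float)
X[:, 2] += 2 * y
X[:, 7] += 2 * y
phase1 = np.array([5., 4., 3., 2., 1., 0., 0., 0., 0., 0.])
chosen = two_phase_select(X, y, phase1, abs_corr, k1=5, k2=2)
```

The toy run also shows the design trade-off the abstract implies: the final subset can only be as good as what phase 1 lets through, which is why the combination order matters.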
Keywords (Chinese) ★ Feature selection (特徵選擇)
★ High-dimensional dataset (高維資料集)
★ Autoencoder (自動編碼器)
★ Ensemble learning (集成學習)
★ Stability (穩定性)
Keywords (English) ★ Feature selection
★ High-dimensional dataset
★ Autoencoder
★ Ensemble learning
★ Stability
Table of Contents
Abstract (Chinese) i
Abstract (English) ii
Acknowledgements iii
Table of Contents iv
List of Figures vii
List of Tables ix
1. Introduction 1
1-1 Research Background 1
1-2 Research Motivation 2
1-3 Research Objectives 4
2. Literature Review 5
2-1 Types of Feature Selection 5
2-2 Filter-Based Relevance Feature Selection 7
2-3 Autoencoder Feature Selection 11
2-4 Activation Functions 13
2-5 Stability 15
2-6 Ensemble Learning 15
2-7 Classifiers 17
3. Research Methodology 18
3-1 Datasets 19
3-2 Data Preprocessing 20
3-3 Experimental Parameter Settings and Methods 21
3-3-1 Experimental Environment Design 21
3-3-2 Autoencoder Feature Selection Design 22
3-4 Evaluation Metrics 24
3-5 Filter-Based Feature Selection Experimental Procedure 27
3-6 Applicability of Autoencoder Feature Selection 28
3-6-1 Effect of Activation Functions on Autoencoder Feature Selection 28
3-6-2 Performance of Autoencoder versus Relevance Feature Selection 30
3-7 Effect of Homogeneous Ensembles on Feature Selection 30
3-8 Effect of Two-Phase Combination on Feature Selection 32
4. Experimental Results and Analysis 34
4-1 Effect of Different Activation Functions on Autoencoder Feature Selection Performance 34
4-2 Applicability Analysis of Autoencoder and Relevance Feature Selection 35
4-2-1 Feature Selection Performance at Different Thresholds 36
4-2-2 Effect of Dataset I/F Type on Feature Selection Applicability 39
4-3 Applicability Analysis of Homogeneous Ensemble Feature Selection 40
4-3-1 Effect of Sampling Ratio and Ensemble Size on Feature Selection Performance 40
4-3-2 Homogeneous Ensemble Feature Selection Performance at Different Thresholds 51
4-3-3 Effect of Dataset I/F Type on Homogeneous Ensembles 54
4-4 Two-Phase Feature Selection Based on Autoencoders and Relevance 56
4-4-1 Performance of Relevance-Enhanced Autoencoder Feature Selection at Different Thresholds 57
4-4-2 Performance of Autoencoder-Enhanced Relevance Feature Selection at Different Thresholds 60
4-4-3 Applicability Analysis of the Two Combination Orders 62
4-4-4 Performance of the Best Order Combined with Non-Ensemble Relevance Feature Selection 63
4-5 Applicability Analysis of Homogeneous Ensemble versus Two-Phase Feature Selection 64
4-5-1 Prediction Performance and Subset Stability of the Two Architectures 65
4-5-2 Applicability of the Two Architectures across Dataset I/F Types 66
5. Conclusion 68
5-1 Conclusions and Contributions 68
5-2 Research Limitations 69
5-3 Future Work and Suggestions 69
References 71
References
[1] S. Ayesha, M. K. Hanif, and R. Talib, "Overview and comparative study of dimensionality reduction techniques for high dimensional data," Information Fusion, vol. 59, pp. 44-58, 2020
[2] S. Chen, J. Montgomery, and A. Bolufé-Röhler, "Measuring the curse of dimensionality and its effects on particle swarm optimization and differential evolution," Applied Intelligence, vol. 42, no. 3, pp. 514-526, 2015
[3] S. Wold, K. Esbensen, and P. Geladi, "Principal component analysis," Chemometrics and Intelligent Laboratory Systems, vol. 2, no. 1-3, pp. 37-52, 1987
[4] F. Shaheen, B. Verma, and M. Asafuddoula, "Impact of Automatic Feature Extraction in Deep Learning Architecture," Proceedings of 2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA), 2016
[5] Y. Yang and J. O. Pedersen, "A comparative study on feature selection in text categorization," Proceedings of the Fourteenth International Conference on Machine Learning, vol. 97, pp. 412-420, 1997
[6] J. Cai, J. Luo, S. Wang, and S. Yang, "Feature selection in machine learning: A new perspective," Neurocomputing, vol. 300, pp. 70-79, 2018
[7] R. Zebari, A. Abdulazeez, D. Zeebaree, D. Zebari, and J. Saeed, "A Comprehensive Review of Dimensionality Reduction Techniques for Feature Selection and Feature Extraction," Journal of Applied Science and Technology Trends, vol. 1, no. 2, pp. 56-70, 2020
[8] M. Clinciu and H. Hastie, "A survey of explainable AI terminology," Proceedings of the 1st Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence (NL4XAI 2019), 2019
[9] U. M. Khaire and R. Dhanalakshmi, "Stability of feature selection algorithm: A review," Journal of King Saud University - Computer and Information Sciences, vol. 34, no. 4, pp. 1060-1073, 2022
[10] L. Floridi, "Establishing the rules for building trustworthy AI," Nature Machine Intelligence, vol. 1, no. 6, pp. 261-262, 2019
[11] J. C. Ang, A. Mirzal, H. Haron, and H. N. A. Hamed, "Supervised, Unsupervised, and Semi-Supervised Feature Selection: A Review on Gene Selection," IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. 13, no. 5, pp. 971-989, 2016
[12] L. Yu and H. Liu, "Efficient feature selection via analysis of relevance and redundancy," The Journal of Machine Learning Research, vol. 5, pp. 1205-1224, 2004
[13] J. Yang, Y. L. Liu, C. S. Feng, and G. Q. Zhu, "Applying the Fisher score to identify Alzheimer's disease-related genes," Genetics and Molecular Research, vol. 15, no. 2, 2016
[14] L. Wang, Q. Mo, and J. Wang, "MIrExpress: A Database for Gene Coexpression Correlation in Immune Cells Based on Mutual Information and Pearson Correlation," Journal of Immunology Research, vol. 2015, p. 140819, 2015
[15] X. Jin, A. Xu, R. Bie, and P. Guo, "Machine Learning Techniques and Chi-Square Feature Selection for Cancer Classification Using SAGE Gene Expression Profiles," Proceedings of Data Mining for Biomedical Applications, Berlin, Heidelberg, J. Li, Q. Yang, and A.-H. Tan, Eds., 2006
[16] X. He, D. Cai, and P. Niyogi, "Laplacian score for feature selection," Advances in Neural Information Processing Systems, vol. 18, 2005
[17] D. Cai, C. Zhang, and X. He, "Unsupervised feature selection for multi-cluster data," Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2010
[18] Y. Yang, H. T. Shen, Z. Ma, Z. Huang, and X. Zhou, "ℓ2,1-norm regularized discriminative feature selection for unsupervised learning," Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, 2011
[19] P. Zhu, W. Zuo, L. Zhang, Q. Hu, and S. C. Shiu, "Unsupervised feature selection by regularized self-representation," Pattern Recognition, vol. 48, no. 2, pp. 438-446, 2015
[20] K. Han, Y. Wang, C. Zhang, C. Li, and C. Xu, "Autoencoder inspired unsupervised feature selection," Proceedings of 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2018
[21] Y. Wang, H. Yao, and S. Zhao, "Auto-encoder based dimensionality reduction," Neurocomputing, vol. 184, pp. 232-242, 2016
[22] Z. Atashgahi et al., "Quick and robust feature selection: the strength of energy-efficient sparse training for autoencoders," Machine Learning, pp. 1-38, 2022
[23] M. F. Balın, A. Abid, and J. Zou, "Concrete autoencoders: Differentiable feature selection and reconstruction," Proceedings of International Conference on Machine Learning, 2019
[24] X. Xu, H. Gu, Y. Wang, J. Wang, and P. Qin, "Autoencoder Based Feature Selection Method for Classification of Anticancer Drug Response," Front Genet, vol. 10, p. 233, 2019
[25] B. Venkatesh and J. Anuradha, "A hybrid feature selection approach for handling a high-dimensional data," Innovations in Computer Science and Engineering: Proceedings of the Sixth ICICSE 2018, 2019
[26] Z. Huang, C. Yang, X. Zhou, and T. Huang, "A hybrid feature selection method based on binary state transition algorithm and ReliefF," IEEE Journal of Biomedical and Health Informatics, vol. 23, no. 5, pp. 1888-1898, 2018
[27] J. Apolloni, G. Leguizamón, and E. Alba, "Two hybrid wrapper-filter feature selection algorithms applied to high-dimensional microarray experiments," Applied Soft Computing, vol. 38, pp. 922-932, 2016
[28] X. Zhao et al., "A two-stage feature selection method with its application," Computers & Electrical Engineering, vol. 47, pp. 114-125, 2015
[29] A. K. Shukla, P. Singh, and M. Vardhan, "A two-stage gene selection method for biomarker discovery from microarray data for cancer classification," Chemometrics and Intelligent Laboratory Systems, vol. 183, pp. 47-58, 2018
[30] C.-F. Tsai and Y.-T. Sung, "Ensemble feature selection in high dimension, low sample size datasets: Parallel and serial combination approaches," Knowledge-Based Systems, vol. 203, p. 106097, 2020
[31] A. Ghorbani, A. Abid, and J. Zou, "Interpretation of neural networks is fragile," Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, no. 01, 2019
[32] Y. Saeys, T. Abeel, and Y. Van de Peer, "Robust feature selection using ensemble feature selection techniques," Proceedings of Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2008, Berlin, Heidelberg, 2008
[33] A. Jović, K. Brkić, and N. Bogunović, "A review of feature selection methods with applications," Proceedings of 2015 38th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2015
[34] H. Sanz, C. Valim, E. Vegas, J. M. Oller, and F. Reverter, "SVM-RFE: selection and visualization of the most relevant features through non-linear kernels," BMC Bioinformatics, vol. 19, no. 1, p. 432, 2018
[35] D. Jain and V. Singh, "Feature selection and classification systems for chronic disease prediction: A review," Egyptian Informatics Journal, vol. 19, no. 3, pp. 179-189, 2018
[36] M. Kurniawan, S. Yazid, and Y. G. Sucahyo, "Comparison of Feature Selection Methods for DDoS Attacks on Software Defined Networks using Filter-Based, Wrapper-Based and Embedded-Based," JOIV: International Journal on Informatics Visualization, vol. 6, no. 4, pp. 809-814, 2022
[37] M. Dash and H. Liu, "Feature selection for classification," Intelligent Data Analysis, vol. 1, no. 1-4, pp. 131-156, 1997
[38] H. Zhou, J. Zhang, Y. Zhou, X. Guo, and Y. Ma, "A feature selection algorithm of decision tree based on feature weight," Expert Systems with Applications, vol. 164, p. 113842, 2021
[39] B. Xue, M. Zhang, and W. N. Browne, "Particle swarm optimization for feature selection in classification: A multi-objective approach," IEEE Transactions on Cybernetics, vol. 43, no. 6, pp. 1656-1671, 2012
[40] Q. Al-Tashi, S. J. A. Kadir, H. M. Rais, S. Mirjalili, and H. Alhussian, "Binary optimization using hybrid grey wolf optimization for feature selection," IEEE Access, vol. 7, pp. 39496-39508, 2019
[41] J. R. Quinlan, C4.5: Programs for Machine Learning. Morgan Kaufmann, 1993
[42] R. Tibshirani, "Regression shrinkage and selection via the lasso," Journal of the Royal Statistical Society: Series B (Methodological), vol. 58, no. 1, pp. 267-288, 1996
[43] H. Zou and T. Hastie, "Regularization and variable selection via the elastic net," Journal of the Royal Statistical Society: series B (statistical methodology), vol. 67, no. 2, pp. 301-320, 2005
[44] A. Bommert, X. Sun, B. Bischl, J. Rahnenführer, and M. Lang, "Benchmark for filter methods for feature selection in high-dimensional classification data," Computational Statistics & Data Analysis, vol. 143, 2020
[45] A. Bommert, T. Welchowski, M. Schmid, and J. Rahnenführer, "Benchmark of filter methods for feature selection in high-dimensional gene expression survival data," Briefings in Bioinformatics, vol. 23, no. 1, p. bbab354, 2022
[46] H. Ding, P.-M. Feng, W. Chen, and H. Lin, "Identification of bacteriophage virion proteins by the ANOVA feature selection and analysis," Molecular BioSystems, vol. 10, no. 8, pp. 2229-2235, 2014
[47] I. Kononenko and M. Robnik-Šikonja, "Non-myopic feature quality evaluation with (R)ReliefF," in Computational Methods of Feature Selection: Chapman and Hall/CRC, 2007
[48] K. Kira and L. A. Rendell, "A practical approach to feature selection," in Machine Learning Proceedings 1992: Elsevier, 1992
[49] I. Kononenko, "Estimating attributes: Analysis and extensions of RELIEF," Proceedings of European Conference on Machine Learning, 1994
[50] C. S. Greene, N. M. Penrod, J. Kiralis, and J. H. Moore, "Spatially uniform relieff (SURF) for computationally-efficient filtering of gene-gene interactions," BioData mining, vol. 2, pp. 1-9, 2009
[51] J. Zhai, S. Zhang, J. Chen, and Q. He, "Autoencoder and its various variants," Proceedings of 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2018
[52] M. Elbattah, C. Loughnane, J.-L. Guérin, R. Carette, F. Cilia, and G. Dequen, "Variational Autoencoder for Image-Based Augmentation of Eye-Tracking Data," Journal of Imaging, vol. 7, no. 5, p. 83, 2021
[53] Y. N. Kunang, S. Nurmaini, D. Stiawan, and A. Zarkasi, "Automatic features extraction using autoencoder in intrusion detection system," Proceedings of 2018 International Conference on Electrical Engineering and Computer Science (ICECOS), 2018
[54] S. Sharifipour, H. Fayyazi, M. Sabokrou, and E. Adeli, "Unsupervised Feature Ranking and Selection Based on Autoencoders," Proceedings of ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019
[55] D. Singh, H. Climente-González, M. Petrovich, E. Kawakami, and M. Yamada, "Fsnet: Feature selection network on high-dimensional biological data," arXiv preprint arXiv:2001.08322, 2020
[56] T. D. Gedeon, "Data mining of inputs: analysing magnitude and functional measures," International Journal of Neural Systems, vol. 8, no. 2, pp. 209-218, 1997
[57] S. Sharma, S. Sharma, and A. Athaiya, "Activation functions in neural networks," Towards Data Science, vol. 6, no. 12, pp. 310-316, 2017
[58] A. L. Maas, A. Y. Hannun, and A. Y. Ng, "Rectifier nonlinearities improve neural network acoustic models," Proceedings of International Conference on Machine Learning, vol. 30, no. 1, 2013
[59] P. Ramachandran, B. Zoph, and Q. V. Le, "Searching for activation functions," arXiv preprint arXiv:1710.05941, 2017
[60] J. Feng and S. Lu, "Performance analysis of various activation functions in artificial neural networks," Proceedings of Journal of Physics: Conference Series, vol. 1237, no. 2, 2019
[61] D.-A. Clevert, T. Unterthiner, and S. Hochreiter, "Fast and accurate deep network learning by exponential linear units (elus)," arXiv preprint arXiv:1511.07289, 2015
[62] V. Bolón-Canedo and A. Alonso-Betanzos, "Ensembles for feature selection: A review and future trends," Information Fusion, vol. 52, pp. 1-12, 2019
[63] B. Chandra and R. K. Sharma, "Exploring autoencoders for unsupervised feature selection," Proceedings of 2015 International Joint Conference on Neural Networks (IJCNN), 2015
[64] P. M. Chelvan and K. Perumal, "A comparative analysis of feature selection stability measures," Proceedings of 2017 International Conference on Trends in Electronics and Informatics (ICEI), 2017
[65] S. Alelyani, Z. Zhao, and H. Liu, "A dilemma in assessing stability of feature selection algorithms," Proceedings of 2011 IEEE International Conference on High Performance Computing and Communications, 2011
[66] O. Sagi and L. Rokach, "Ensemble learning: A survey," Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, vol. 8, no. 4, p. e1249, 2018
[67] M. Blachnik, "Ensembles of instance selection methods: A comparative study," International Journal of Applied Mathematics and Computer Science, vol. 29, no. 1, pp. 151-168, 2019
[68] R. Clarke et al., "The properties of high-dimensional data spaces: implications for exploring gene and protein expression data," Nature Reviews Cancer, vol. 8, no. 1, pp. 37-49, 2008
[69] D. A. Pisner and D. M. Schnyer, "Support vector machine," in Machine learning: Elsevier, 2020
[70] Y. Piao and K. H. Ryu, "A hybrid feature selection method based on symmetrical uncertainty and support vector machine for high-dimensional data classification," Proceedings of Intelligent Information and Database Systems: 9th Asian Conference, ACIIDS 2017, Kanazawa, Japan, April 3-5, 2017, Part I 9, 2017
[71] A. Osareh and B. Shadgar, "Microarray data analysis for cancer classification," Proceedings of 2010 5th International Symposium on Health Informatics and Bioinformatics, 2010
[72] M. Alirezanejad, R. Enayatifar, H. Motameni, and H. Nematzadeh, "Heuristic filter feature selection methods for medical datasets," Genomics, vol. 112, no. 2, pp. 1173-1181, 2020
[73] S. Huang, N. Cai, P. P. Pacheco, S. Narrandes, Y. Wang, and W. Xu, "Applications of support vector machine (SVM) learning in cancer genomics," Cancer Genomics & Proteomics, vol. 15, no. 1, pp. 41-51, 2018
[74] I. Guyon, S. Gunn, A. Ben-Hur, and G. Dror, "Result analysis of the nips 2003 feature selection challenge," Advances in Neural Information Processing Systems, vol. 17, 2004
[75] C. O. Sakar et al., "A comparative analysis of speech signal processing algorithms for Parkinson’s disease classification and the use of the tunable Q-factor wavelet transform," Applied Soft Computing, vol. 74, pp. 255-263, 2019
[76] A. Tsanas, M. A. Little, C. Fox, and L. O. Ramig, "Objective automatic assessment of rehabilitative speech treatment in Parkinson's disease," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 22, no. 1, pp. 181-190, 2013
[77] P. Mesejo et al., "Computer-aided classification of gastrointestinal lesions in regular colonoscopy," IEEE Transactions on Medical Imaging, vol. 35, no. 9, pp. 2051-2063, 2016
[78] T. R. Golub et al., "Molecular classification of cancer: class discovery and class prediction by gene expression monitoring," Science, vol. 286, no. 5439, pp. 531-537, 1999
[79] J. Han, J. Pei, and H. Tong, Data Mining: Concepts and Techniques. Morgan Kaufmann, 2022
[80] D. Singh and B. Singh, "Investigating the impact of data normalization on classification performance," Applied Soft Computing, vol. 97, p. 105524, 2020
[81] F. Pedregosa et al., "Scikit-learn: Machine learning in Python," Journal of Machine Learning Research, vol. 12, pp. 2825-2830, 2011
[82] J. Li et al., "Feature selection: A data perspective," ACM Computing Surveys (CSUR), vol. 50, no. 6, pp. 1-45, 2017
[83] A. Ben Brahim and M. Limam, "Ensemble feature selection for high dimensional data: a new method and a comparative study," Advances in Data Analysis and Classification, vol. 12, pp. 937-952, 2018
[84] V. L. Cao, M. Nicolau, and J. McDermott, "A hybrid autoencoder and density estimation model for anomaly detection," Proceedings of International Conference on Parallel Problem Solving from Nature, 2016
[85] B. Pes, "Ensemble feature selection for high-dimensional data: a stability analysis across multiple domains," Neural Computing and Applications, vol. 32, no. 10, pp. 5951-5973, 2020
[86] B. Seijo-Pardo, I. Porto-Díaz, V. Bolón-Canedo, and A. Alonso-Betanzos, "Ensemble feature selection: homogeneous and heterogeneous approaches," Knowledge-Based Systems, vol. 118, pp. 124-139, 2017
[87] H. Shan et al., "Competitive performance of a modularized deep neural network compared to commercial algorithms for low-dose CT image reconstruction," Nature Machine Intelligence, vol. 1, no. 6, pp. 269-276, 2019
[88] R. Wald, T. M. Khoshgoftaar, and D. Dittman, "Mean aggregation versus robust rank aggregation for ensemble gene selection," Proceedings of 2012 11th International Conference on Machine Learning and Applications, vol. 1, 2012
[89] R. Wald, T. M. Khoshgoftaar, D. Dittman, W. Awada, and A. Napolitano, "An extensive comparison of feature ranking aggregation techniques in bioinformatics," Proceedings of 2012 IEEE 13th International Conference on Information Reuse & Integration (IRI), 2012
Advisor: Kuen-Liang Su (蘇坤良)    Approval Date: 2023-7-24

For questions regarding this thesis, please contact the National Central University Library, Extension Services Division, TEL: (03)422-7151 ext. 57407, or by e-mail.