References
1. Khan, S.S. and M.G. Madden, One-class classification: taxonomy of study and review of techniques. The Knowledge Engineering Review, 2014. 29(3): p. 345-374.
2. Puri, A. and M. Gupta, Review on Missing Value Imputation Techniques in Data Mining. IJSRCSEIT, 2017. 2(7).
3. Olvera-López, J.A., et al., A review of instance selection methods. Artificial Intelligence Review, 2010. 34(2): p. 133-143.
4. Haixiang, G., et al., Learning from class-imbalanced data: Review of methods and applications. Expert Systems with Applications, 2017. 73: p. 220-239.
5. Hempstalk, K. and E. Frank, Discriminating Against New Classes: One-class versus Multi-class Classification, in AI 2008: Advances in Artificial Intelligence. 2008. p. 325-336.
6. Olvera-López, J.A., et al., A review of instance selection methods. Artificial Intelligence Review, 2010. 34(2): p. 133-143.
7. Tan, A.C., D. Gilbert, and Y. Deville, Multi-class protein fold classification using a new ensemble machine learning approach. Genome Informatics, 2003. 14: p. 206-217.
8. Abe, N., B. Zadrozny, and J. Langford. An iterative method for multi-class cost-sensitive learning. in Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining. 2004.
9. Zhou, Z.-H. and X.-Y. Liu, Training cost-sensitive neural networks with methods addressing the class imbalance problem. IEEE Transactions on Knowledge and Data Engineering, 2005. 18(1): p. 63-77.
10. Chen, K., B.-L. Lu, and J.T. Kwok. Efficient classification of multi-label and imbalanced data using min-max modular classifiers. in The 2006 IEEE International Joint Conference on Neural Network Proceedings. 2006. IEEE.
11. Sun, Y., M.S. Kamel, and Y. Wang. Boosting for learning multiple classes with imbalanced class distribution. in Sixth International Conference on Data Mining (ICDM'06). 2006. IEEE.
12. Zhou, Z.H. and X.Y. Liu, On multi‐class cost‐sensitive learning. Computational Intelligence, 2010. 26(3): p. 232-257.
13. He, H. and E.A. Garcia, Learning from Imbalanced Data. IEEE Transactions on Knowledge and Data Engineering, 2009. 21(9): p. 1263-1284.
14. Weiss, G.M., Mining with rarity: a unifying framework. ACM SIGKDD Explorations Newsletter, 2004. 6(1): p. 7-19.
15. Kotsiantis, S., D. Kanellopoulos, and P. Pintelas, Handling imbalanced datasets: A review. GESTS International Transactions on Computer Science and Engineering, 2006. Vol. 30.
16. Chawla, N.V., et al., SMOTE: Synthetic Minority Over-sampling Technique. Journal of Artificial Intelligence Research, 2002. 16: p. 321-357.
17. Bekkar, M. and T.A. Alitouche, Imbalanced Data Learning Approaches Review. International Journal of Data Mining & Knowledge Management Process, 2013. 3(4): p. 15-33.
18. Japkowicz, N., Learning from Imbalanced Data Sets: A Comparison of Various Strategies, in AAAI. 2000.
19. Drummond, C. and R.C. Holte, C4.5, Class Imbalance, and Cost Sensitivity: Why Under-Sampling beats Over-Sampling, in Workshop on Learning from Imbalanced Datasets II, ICML. 2003: Washington DC.
20. Chawla, N.V., et al., SMOTE: Synthetic Minority Over-sampling Technique. Journal of Artificial Intelligence Research, 2002. 16: p. 321-357.
21. Wah, Y.B., et al., Handling imbalanced dataset using SVM and k-NN approach. 2016.
22. Khan, S.S. and M.G. Madden, A Survey of Recent Trends in One Class Classification. AICS 2009, 2010: p. 188-197.
23. Breunig, M.M., et al., LOF: Identifying Density-Based Local Outliers, in Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data. 2000.
24. Schölkopf, B., et al., Support Vector Method for Novelty Detection. Advances in Neural Information Processing Systems, 2000.
25. Tax, D.M.J. and R.P.W. Duin, Support Vector Data Description. Machine Learning, 2004. 54: p. 45-66.
26. Liu, F.T., K.M. Ting, and Z.-H. Zhou, Isolation-based Anomaly Detection. ACM Transactions on Knowledge Discovery from Data, 2012. 6(1).
27. Shin, H.J., D.-H. Eom, and S.-S. Kim, One-class support vector machines—an application in machine fault detection and classification. Computers & Industrial Engineering, 2005. 48(2): p. 395-408.
28. Lin, W.-C. and C.-F. Tsai, Missing value imputation: a review and analysis of the literature (2006–2017). Artificial Intelligence Review, 2019.
29. Strike, K., K.E. Emam, and N. Madhavji, Software Cost Estimation with Incomplete Data. IEEE Transactions on Software Engineering, 2001. 27(10).
30. Raymond, M.R. and D.M. Roberts, A comparison of methods for treating incomplete data in selection research. Educational and Psychological Measurement, 1987.
31. Silva-Ramirez, E.L., et al., Missing value imputation on missing completely at random data using multilayer perceptrons. Neural Networks, 2011. 24(1): p. 121-129.
32. Pelckmans, K., et al., Handling missing values in support vector machine classifiers. Neural Networks, 2005. 18(5-6): p. 684-692.
33. Farhangfar, A., L. Kurgan, and J. Dy, Impact of imputation of missing values on classification error for discrete data. Pattern Recognition, 2008. 41(12): p. 3692-3705.
34. Acuña, E. and C. Rodriguez. The treatment of missing values and its effect on classifier accuracy, in Classification, Clustering, and Data Mining Applications. in Proceedings of the Meeting of the International Federation of Classification Societies (IFCS). 2004.
35. Burgette, L.F. and J.P. Reiter, Multiple imputation for missing data via sequential regression trees. American journal of epidemiology, 2010. 172(9): p. 1070-1076.
36. Shah, A.D., et al., Comparison of random forest and parametric imputation models for imputing missing data using MICE: a CALIBER study. American journal of epidemiology, 2014. 179(6): p. 764-774.
37. Doove, L.L., S. Van Buuren, and E. Dusseldorp, Recursive partitioning for missing data imputation in the presence of interaction effects. Computational Statistics & Data Analysis, 2014. 72: p. 92-104.
38. Breiman, L., et al., Classification and regression trees. 1984: CRC press.
39. Wilson, D.R. and T.R. Martinez, Reduction Techniques for Instance-Based Learning Algorithms. Machine Learning, 2000. 38(3): p. 257-286.
40. Tsai, C.-F. and F.-Y. Chang, Combining instance selection for better missing value imputation. Journal of Systems and Software, 2016. 122: p. 63-71.
41. Cover, T. and P. Hart, Nearest neighbor pattern classification. IEEE Transactions on Information Theory, 1967. 13(1): p. 21-27.
42. Wilson, D.L., Asymptotic properties of nearest neighbor rules using edited data. IEEE Transactions on Systems, Man, and Cybernetics, 1972(3): p. 408-421.
43. Aha, D.W., D. Kibler, and M.K. Albert, Instance-Based Learning Algorithms. Machine Learning, 1991. 6: p. 37-66.
44. Tsai, C.-F., W. Eberle, and C.-Y. Chu, Genetic algorithms in feature and instance selection. Knowledge-Based Systems, 2013. 39: p. 240-247.
45. Woods, K.S., et al., Comparative evaluation of pattern recognition techniques for detection of microcalcifications in mammography, in State of The Art in Digital Mammographic Image Analysis. 1994, World Scientific. p. 213-231.
46. Wang, K. and S. Stolfo, One-class training for masquerade detection. 2003.
47. Devi, D., S.K. Biswas, and B. Purkayastha, Learning in presence of class imbalance and class overlapping by using one-class SVM and undersampling technique. Connection Science, 2019. 31(2): p. 105-142.