References
Adankon, M. M., & Cheriet, M. (2009). Model selection for the LS-SVM. Application to handwriting recognition. Pattern Recognition, 42(12), 3264-3270.
Agarwal, B., & Mittal, N. (2014). Text classification using machine learning methods-a survey. In Proceedings of the Second International Conference on Soft Computing for Problem Solving (SocProS 2012), December 28-30, 2012 (pp. 701-709). Springer, New Delhi.
Alickovic, E., & Subasi, A. (2016). Medical decision support system for diagnosis of heart arrhythmia using DWT and random forests classifier. Journal of Medical Systems, 40(4), 1.
Amari, S. I., & Wu, S. (1999). Improving support vector machine classifiers by modifying kernel functions. Neural Networks, 12(6), 783-789.
Atenas, J., & Havemann, L. (2013). Quality assurance in the open: an evaluation of OER repositories. INNOQUAL-International Journal for Innovation and Quality in Learning, 1(2), 22-34.
Atenas, J., & Havemann, L. (2014). Questions of quality in repositories of open educational resources: a literature review. Research in Learning Technology, 22(1), 20889.
Biletskiy, Y., Wojcenovic, M., & Baghi, H. (2009). Focused crawling for downloading learning objects–an architectural perspective. Interdisciplinary Journal of E-Learning and Learning Objects, 5, 169-180.
Bissell, A. N. (2009). Permission granted: open licensing for educational resources. Open Learning, 24(1), 97-106.
Bosch, A., Zisserman, A., & Munoz, X. (2007, October). Image classification using random forests and ferns. In 2007 IEEE 11th International Conference on Computer Vision (ICCV) (pp. 1-8). IEEE.
Boznar, M., Lesjak, M., & Mlakar, P. (1993). A neural network-based method for short-term predictions of ambient SO2 concentrations in highly polluted industrial areas of complex terrain. Atmospheric Environment. Part B. Urban Atmosphere, 27(2), 221-230.
Breiman, L. (2001). Random forests. Machine Learning, 45(1), 5-32.
Buckinx, W., & Van den Poel, D. (2005). Customer base analysis: partial defection of behaviourally loyal clients in a non-contractual FMCG retail setting. European Journal of Operational Research, 164(1), 252-268.
Caswell, T., Henson, S., Jensen, M., & Wiley, D. (2008). Open content and open educational resources: Enabling universal education. The International Review of Research in Open and Distributed Learning, 9(1).
Chen, K., Wang, L., & Chi, H. (1997). Methods of combining multiple classifiers with different features and their applications to text-independent speaker identification. International Journal of Pattern Recognition and Artificial Intelligence, 11(03), 417-445.
Chumerin, N., & Van Hulle, M. M. (2006, September). Comparison of two feature extraction methods based on maximization of mutual information. In 2006 16th IEEE Signal Processing Society Workshop on Machine Learning for Signal Processing (pp. 343-348). IEEE.
Clemen, R. T. (1989). Combining forecasts: A review and annotated bibliography. International Journal of Forecasting, 5(4), 559-583.
Clements, K. I., & Pawlowski, J. M. (2012). User-oriented quality for OER: Understanding teachers' views on re-use, quality, and trust. Journal of Computer Assisted Learning, 28(1), 4-14.
Commonwealth of Learning, & UNESCO. (2011). Guidelines for open educational resources (OER) in higher education.
Cutler, D. R., Edwards, T. C., Beard, K. H., Cutler, A., Hess, K. T., Gibson, J., & Lawler, J. J. (2007). Random forests for classification in ecology. Ecology, 88(11), 2783-2792.
D’Antoni, S. (2009). Open educational resources: Reviewing initiatives and issues.
Dreiseitl, S., & Ohno-Machado, L. (2002). Logistic regression and artificial neural network classification models: a methodology review. Journal of Biomedical Informatics, 35(5), 352-359.
Gardner, J. W., Craven, M., Dow, C., & Hines, E. L. (1998). The prediction of bacteria type and culture growth phase by an electronic nose with a multi-layer perceptron network. Measurement Science and Technology, 9(1), 120.
Gardner, M. W., & Dorling, S. R. (1998). Artificial neural networks (the multilayer perceptron)—a review of applications in the atmospheric sciences. Atmospheric Environment, 32(14), 2627-2636.
Izenman, A. J. (2013). Linear discriminant analysis. In Modern multivariate statistical techniques (pp. 237-280). Springer New York.
Jin, X., Zhao, M., Chow, T. W., & Pecht, M. (2014). Motor bearing fault diagnosis using trace ratio linear discriminant analysis. IEEE Transactions on Industrial Electronics, 61(5), 2441-2451.
Johnstone, S. M. (2005). Open educational resources serve the world. Educause Quarterly, 28(3), 15.
Khalid, S., Khalil, T., & Nasreen, S. (2014, August). A survey of feature selection and feature extraction techniques in machine learning. In 2014 Science and Information Conference (SAI) (pp. 372-378). IEEE.
Khan, N. M., Ksantini, R., Ahmad, I. S., & Boufama, B. (2012). A novel SVM+NDA model for classification with an application to face recognition. Pattern Recognition, 45(1), 66-79.
Kim, S. Y., Jung, T. S., Suh, E. H., & Hwang, H. S. (2006). Customer segmentation and strategy development based on customer lifetime value: A case study. Expert Systems with Applications, 31(1), 101-107.
Kohavi, R. (1995, August). A study of cross-validation and bootstrap for accuracy estimation and model selection. In IJCAI (Vol. 14, No. 2, pp. 1137-1145).
Lee, S. (2005). Application of logistic regression model and its validation for landslide susceptibility mapping using GIS and remote sensing data. International Journal of Remote Sensing, 26(7), 1477-1491.
Liaw, A., & Wiener, M. (2002). Classification and regression by randomForest. R News, 2(3), 18-22.
Liu, C., & Wechsler, H. (2002). Gabor feature based classification using the enhanced fisher linear discriminant model for face recognition. IEEE Transactions on Image Processing, 11(4), 467-476.
Lotte, F., Congedo, M., Lécuyer, A., Lamarche, F., & Arnaldi, B. (2007). A review of classification algorithms for EEG-based brain–computer interfaces. Journal of Neural Engineering, 4(2), R1.
Manek, A. S., Shenoy, P. D., Mohan, M. C., & Venugopal, K. R. (2017). Aspect term extraction for sentiment analysis in large movie reviews using Gini Index feature selection method and SVM classifier. World Wide Web, 20(2), 135-154.
Mehra, N., & Gupta, S. (2013). Survey on multiclass classification methods.
Morgan, N., & Bourlard, H. (1990, April). Continuous speech recognition using multilayer perceptrons with hidden Markov models. In 1990 International Conference on Acoustics, Speech, and Signal Processing (ICASSP-90) (pp. 413-416). IEEE.
Motoda, H., & Liu, H. (2002). Feature selection, extraction and construction. Communication of IICM (Institute of Information and Computing Machinery, Taiwan), 5, 67-72.
Nijhuis, J. A. G., Ter Brugge, M. H., Helmholt, K. A., Pluim, J. P. W., Spaanenburg, L., Venema, R. S., & Westenberg, M. A. (1995, November). Car license plate recognition with neural networks and fuzzy logic. In Proceedings of the 1995 IEEE International Conference on Neural Networks (Vol. 5, pp. 2232-2236). IEEE.
Pawlowski, J. M., & Bick, M. (2012). Open educational resources. Business & Information Systems Engineering, 4(4), 209-212.
Qi, Z., Tian, Y., & Shi, Y. (2013). Robust twin support vector machine for pattern classification. Pattern Recognition, 46(1), 305-316.
Ruiz-Calleja, A., Vega-Gorgojo, G., Asensio-Pérez, J. I., Bote-Lorenzo, M. L., Gómez-Sánchez, E., & Alario-Hoyos, C. (2012). A Linked Data approach for the discovery of educational ICT tools in the Web of Data. Computers & Education, 59(3), 952-962.
Samant, A., & Adeli, H. (2000). Feature extraction for traffic incident detection using wavelet transform and linear discriminant analysis. Computer‐Aided Civil and Infrastructure Engineering, 15(4), 241-250.
Sanjay, G. (2016). A Comparative Study on Face Recognition using Subspace Analysis. In International Conference on Computer Science and Technology Allies in Research-March (p. 82).
Scholkopf, B., Sung, K. K., Burges, C. J., Girosi, F., Niyogi, P., Poggio, T., & Vapnik, V. (1997). Comparing support vector machines with Gaussian kernels to radial basis function classifiers. IEEE Transactions on Signal Processing, 45(11), 2758-2765.
Sebastiani, F. (2002). Machine learning in automated text categorization. ACM Computing Surveys (CSUR), 34(1), 1-47.
Statnikov, A., Wang, L., & Aliferis, C. F. (2008). A comprehensive comparison of random forests and support vector machines for microarray-based cancer classification. BMC Bioinformatics, 9(1), 319.
Subasi, A., Alickovic, E., & Kevric, J. (2017). Diagnosis of Chronic Kidney Disease by Using Random Forest. In CMBEBIH 2017 (pp. 589-594). Springer, Singapore.
Subasi, A., & Gursoy, M. I. (2010). EEG signal classification using PCA, ICA, LDA and support vector machines. Expert Systems with Applications, 37(12), 8659-8666.
Tong, S., & Koller, D. (2001). Support vector machine active learning with applications to text classification. Journal of Machine Learning Research, 2(Nov), 45-66.
UNESCO. (2002). Forum on the impact of open courseware for higher education in developing countries. Paris: UNESCO.
Uysal, A. K., & Gunal, S. (2014). The impact of preprocessing on text classification. Information Processing & Management, 50(1), 104-112.
Vapnik, V. N. (1998). Statistical learning theory (Adaptive and Learning Systems for Signal Processing, Communications, and Control). Wiley.
Verbert, K., Ochoa, X., Derntl, M., Wolpers, M., Pardo, A., & Duval, E. (2012). Semi-automatic assembly of learning resources. Computers & Education, 59(4), 1257-1272.
Xanthopoulos, P., Pardalos, P. M., & Trafalis, T. B. (2013). Linear discriminant analysis. In Robust Data Mining (pp. 27-33). Springer New York.