References
Agudo-Peregrina, Á. F., Iglesias-Pradas, S., Conde-González, M. Á., & Hernández-García, Á. (2014). Can we predict success from log data in VLEs? Classification of interactions for learning analytics and their relation with performance in VLE-supported F2F and online learning. Computers in Human Behavior, 31, 542–550.
Akçapınar, G., Hasnine, M. N., Majumdar, R., Flanagan, B., & Ogata, H. (2019). Developing an early-warning system for spotting at-risk students by using ebook interaction logs. Smart Learning Environments, 6(1), 4.
Albán, M., & Mauricio, D. (2018). Decision trees for the early identification of university students at risk of desertion. International Journal of Engineering & Technology, 7(4.44), 51–54.
Alston, G. L., Lane, D., & Wright, N. J. (2014). The methodology for the early identification of students at risk for failure in a professional degree program. Currents in Pharmacy Teaching and Learning, 6(6), 798–806.
Arroway, P., Morgan, G., O’Keefe, M., & Yanosky, R. (2016). Learning analytics in higher education. EDUCAUSE, available at: https://library.educause.edu/~/media/files/library/2016/2/ers1504la.pdf (accessed 28 February 2017).
Asif, R., Merceron, A., & Pathan, M. K. (2014). Predicting student academic performance at degree level: A case study. International Journal of Intelligent Systems and Applications, 7(1), 49.
Beatty, I. D. (2013). Standards-based grading in introductory university physics. Journal of the Scholarship of Teaching and Learning, 1–22.
Bellman, R. E. (2015). Adaptive control processes: A guided tour. Princeton University Press.
Bhuyan, M. H., Khan, S. S. A., & Rahman, M. Z. (2014). Teaching analog electronics course for electrical engineering students in cognitive domain. Journal of Electrical Engineering, the Institute of Engineers Bangladesh (IEB-EE), 40(1-2), 52–58.
Boser, B. E., Guyon, I. M., & Vapnik, V. N. (1992). A training algorithm for optimal margin classifiers. In Proceedings of the fifth annual workshop on computational learning theory (pp. 144–152). ACM.
Buda, M., Maki, A., & Mazurowski, M. A. (2018). A systematic study of the class imbalance problem in convolutional neural networks. Neural Networks, 106, 249–259.
Caragiannis, I., Krimpas, G. A., & Voudouris, A. A. (2016). How effective can simple ordinal peer grading be? In Proceedings of the 2016 ACM conference on economics and computation (pp. 323–340). ACM.
Çevik, Y. D. (2015). Predicting college students’ online information searching strategies based on epistemological, motivational, decision-related, and demographic variables. Computers & Education, 90, 54–63.
Chawla, N. V., Bowyer, K. W., Hall, L. O., & Kegelmeyer, W. P. (2002). SMOTE: Synthetic minority over-sampling technique. Journal of Artificial Intelligence Research, 16, 321–357.
Chawla, N. V., Japkowicz, N., & Kotcz, A. (2004). Special issue on learning from imbalanced data sets. ACM SIGKDD Explorations Newsletter, 6(1), 1–6.
Chen, W., Brinton, C. G., Cao, D., Mason-Singh, A., Lu, C., & Chiang, M. (2018). Early detection prediction of learning outcomes in online short-courses via learning behaviors. IEEE Transactions on Learning Technologies.
Choi, S. P., Lam, S. S., Li, K. C., & Wong, B. T. (2018). Learning analytics at low cost: At-risk student prediction with clicker data and systematic proactive interventions. Journal of Educational Technology & Society, 21(2), 273–290.
Chui, K. T., Fung, D. C. L., Lytras, M. D., & Lam, T. M. (2018). Predicting at-risk university students in a virtual learning environment via a machine learning algorithm. Computers in Human Behavior.
Cortes, C., & Vapnik, V. (1995). Support-vector networks. Machine Learning, 20(3), 273–297.
Davis, J., & Goadrich, M. (2006). The relationship between precision-recall and ROC curves. In Proceedings of the 23rd international conference on machine learning (pp. 233–240). ACM.
Devijver, P. A., & Kittler, J. (1982). Pattern recognition: A statistical approach. Prentice Hall.
Elikai, F., & Schuhmann, P. W. (2010). An examination of the impact of grading policies on students’ achievement. Issues in Accounting Education, 25(4), 677–693.
Estabrooks, A., Jo, T., & Japkowicz, N. (2004). A multiple resampling method for learning from imbalanced data sets. Computational Intelligence, 20(1), 18–36.
Fan, J., & Li, R. (2006). Statistical challenges with high dimensionality: Feature selection in knowledge discovery. arXiv preprint math/0602133.
Fawcett, T. (2004). ROC graphs: Notes and practical considerations for researchers. Machine Learning, 31(1), 1–38.
Golub, G. H., Heath, M., & Wahba, G. (1979). Generalized cross-validation as a method for choosing a good ridge parameter. Technometrics, 21(2), 215–223.
Green, S. B. (1991). How many subjects does it take to do a regression analysis. Multivariate Behavioral Research, 26(3), 499–510.
Gui, C. (2017). Analysis of imbalanced data set problem: The case of churn prediction for telecommunication. Artificial Intelligence Research, 6(2), 93.
Guyon, I., & Elisseeff, A. (2003). An introduction to variable and feature selection. Journal of Machine Learning Research, 3(Mar), 1157–1182.
Guyon, I., Gunn, S., Nikravesh, M., & Zadeh, L. A. (2008). Feature extraction: Foundations and applications. Springer.
Hachey, A. C., Wladis, C. W., & Conway, K. M. (2014). Do prior online course outcomes provide more information than GPA alone in predicting subsequent online course grades and retention? An observational study at an urban community college. Computers & Education, 72, 59–67.
He, H., Bai, Y., Garcia, E. A., & Li, S. (2008). ADASYN: Adaptive synthetic sampling approach for imbalanced learning. In 2008 IEEE international joint conference on neural networks (IEEE world congress on computational intelligence) (pp. 1322–1328). IEEE.
Hira, Z. M., & Gillies, D. F. (2015). A review of feature selection and feature extraction methods applied on microarray data. Advances in bioinformatics, 2015.
Hu, Y.-H., Lo, C.-L., & Shih, S.-P. (2014). Developing early warning systems to predict students’ online learning performance. Computers in Human Behavior, 36, 469–478.
Huang, S., & Fang, N. (2013). Predicting student academic performance in an engineering dynamics course: A comparison of four types of predictive mathematical models. Computers & Education, 61, 133–145.
Hwang, G.-J., Chu, H.-C., & Yin, C. (2017). Objectives, methodologies and research issues of learning analytics. Taylor & Francis.
Jain, A., & Zongker, D. (1997). Feature selection: Evaluation, application, and small sample performance. IEEE transactions on pattern analysis and machine intelligence, 19(2), 153– 158.
Janecek, A., Gansterer, W., Demel, M., & Ecker, G. (2008). On the relationship between feature selection and classification accuracy. In New challenges for feature selection in data mining and knowledge discovery (pp. 90–105).
Jenke, R., Peer, A., & Buss, M. (2014). Feature extraction and selection for emotion recognition from EEG. IEEE Transactions on Affective Computing, 5(3), 327–339.
Johnson, L., Becker, S. A., Cummins, M., Estrada, V., Freeman, A., & Hall, C. (2016). Nmc horizon report: 2016 higher education edition. The New Media Consortium.
Johnson, L. F., & Witchey, H. (2011). The 2010 horizon report: Museum edition. Curator: The Museum Journal, 54(1), 37–40.
Jolliffe, I. T. (1982). A note on the use of principal components in regression. Journal of the Royal Statistical Society: Series C (Applied Statistics), 31(3), 300–303.
Kamal, P., & Ahuja, S. (2019). An ensemble-based model for prediction of academic performance of students in undergrad professional course. Journal of Engineering, Design and Technology.
Khalid, S., Khalil, T., & Nasreen, S. (2014). A survey of feature selection and feature extraction techniques in machine learning. In 2014 science and information conference (pp. 372–378). IEEE.
Kodinariya, T. M., & Makwana, P. R. (2013). Review on determining number of cluster in k-means clustering. International Journal, 1(6), 90–95.
Kulick, G., & Wright, R. (2008). The impact of grading on the curve: A simulation analysis. International Journal for the Scholarship of Teaching and Learning, 2(2), n2.
Kuzilek, J., Hlosta, M., Herrmannova, D., Zdrahal, Z., & Wolff, A. (2015). OU Analyse: Analysing at-risk students at the Open University. Learning Analytics Review, 1–16.
Lara, J. A., Lizcano, D., Martínez, M. A., Pazos, J., & Riera, T. (2014). A system for knowledge discovery in e-learning environments within the European Higher Education Area–application to student data from Open University of Madrid, UDIMA. Computers & Education, 72, 23–36.
Liu, S., Wang, Y., Zhang, J., Chen, C., & Xiang, Y. (2017). Addressing the class imbalance problem in twitter spam detection using ensemble learning. Computers & Security, 69, 35–49.
Macfadyen, L. P., & Dawson, S. (2010). Mining LMS data to develop an “early warning system” for educators: A proof of concept. Computers & Education, 54(2), 588–599.
MacQueen, J. et al. (1967). Some methods for classification and analysis of multivariate observations. In Proceedings of the fifth Berkeley symposium on mathematical statistics and probability (Vol. 1, 14, pp. 281–297). Oakland, CA, USA.
Mani, I., & Zhang, I. (2003). Knn approach to unbalanced data distributions: A case study involving information extraction. In Proceedings of workshop on learning from imbalanced datasets (Vol. 126).
Márquez-Vera, C., Cano, A., Romero, C., Noaman, A. Y. M., Mousa Fardoun, H., & Ventura, S. (2016). Early dropout prediction using data mining: A case study with high school students. Expert Systems, 33(1), 107–124.
Meier, Y., Xu, J., Atan, O., & Van der Schaar, M. (2016). Predicting grades. IEEE Transactions on Signal Processing, 64(4), 959–972.
Melli, G., Zaïane, O. R., & Kitts, B. (2006). Introduction to the special issue on successful real-world data mining applications. SIGKDD Explorations, 8(1), 1–2.
Millard, J. P. (2016). How can instructional staff be effectively introduced to a standards-based grading policy.
Motoda, H., & Liu, H. (2002). Feature selection, extraction and construction. Communication of IICM (Institute of Information and Computing Machinery, Taiwan) Vol, 5(67-72), 2.
Nam, S., Frishkoff, G., & Collins-Thompson, K. (2018). Predicting students’ disengaged behaviors in an online meaning-generation task. IEEE Transactions on Learning Technologies, 11(3), 362–375.
Pan, S. J., & Yang, Q. (2010). A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10), 1345–1359.
Papamitsiou, Z., & Economides, A. A. (2014). Learning analytics and educational data mining in practice: A systematic literature review of empirical evidence. Journal of Educational Technology & Society, 17(4), 49–64.
Pearson, K. (1901). LIII. On lines and planes of closest fit to systems of points in space. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 2(11), 559–572.
Peng, C.-C. (2017). Grading on a curve in prerequisite courses and student performance in online introductory corporate finance classes. Journal of Higher Education Theory & Practice, 17 (9).
Raman, K., & Joachims, T. (2015). Bayesian ordinal peer grading. In Proceedings of the second (2015) ACM conference on learning @ scale (pp. 149–156). ACM.
Romero, C., López, M.-I., Luna, J.-M., & Ventura, S. (2013). Predicting students’ final performance from participation in on-line discussion forums. Computers & Education, 68, 458–472.
Sadler, P. M., & Good, E. (2006). The impact of self-and peer-grading on student learning. Educational assessment, 11(1), 1–31.
Thammasiri, D., Delen, D., Meesad, P., & Kasap, N. (2014). A critical assessment of imbalanced class distribution problem: The case of predicting freshmen student attrition. Expert Systems with Applications, 41(2), 321–330.
Thornton, C., Hutter, F., Hoos, H. H., & Leyton-Brown, K. (2013). Auto-WEKA: Combined selection and hyperparameter optimization of classification algorithms. In Proceedings of the 19th ACM SIGKDD international conference on knowledge discovery and data mining (pp. 847–855). ACM.
Van Leeuwen, A., Janssen, J., Erkens, G., & Brekelmans, M. (2013). Teacher interventions in a synchronous, co-located CSCL setting: Analyzing focus, means, and temporality. Computers in Human Behavior, 29(4), 1377–1386.
Villagrá-Arnedo, C.-J., Gallego-Durán, F. J., Compañ, P., Llorens Largo, F., Molina-Carmona, R., et al. (2016). Predicting academic performance from behavioural and learning data.
Walstad, W. B., & Miller, L. A. (2016). What’s in a grade? grading policies and practices in principles of economics. The Journal of Economic Education, 47(4), 338–350.
Wedell, D. H., Parducci, A., & Roman, D. (1989). Student perceptions of fair grading: A range- frequency analysis. American Journal of Psychology, 102(2), 233–248.
Xing, W., Chen, X., Stein, J., & Marcinkowski, M. (2016). Temporal predication of dropouts in MOOCs: Reaching the low hanging fruit through stacking generalization. Computers in Human Behavior, 58, 119–129.
Xing, W., & Du, D. (2018). Dropout prediction in MOOCs: Using deep learning for personalized intervention. Journal of Educational Computing Research, 0735633118757015.
Yang, S. J., Huang, J. C., & Huang, A. Y. (2017). MOOCs in Taiwan: The movement and experiences. In Open education: From OERs to MOOCs (pp. 101–116). Springer.