I Ching embodies the wisdom of ancient China and contains an extensive and profound logical system with deep implied meanings. I Ching uses abstract symbols to represent everything in the world. The Bagua (eight trigrams) created by Fu Xi is the basis of I Ching and comprises Qián (☰), Duì (☱), Lí (☲), Zhèn (☳), Xùn (☴), Kǎn (☵), Gèn (☶), and Kūn (☷). The connotations of I Ching fall into Philosophy and Image-and-Number, and the generalization of Bagua is the most important part of Image-and-Number. In practice, Image and Number are obtained through divination and are collocated with the generalized categories of things to arrive at the answer to the divination. However, everyone explains and understands I Ching differently, and there has been no effective method for systematically referring things to the Bagua. Hence, this research applies supervised machine learning to the problem of Bagua generalization. Everything in the world is grouped into seven subjects, including human, situation, body, disease, object, and building, and feature attributes and property values are identified for each subject. This research uses these feature attributes and property values to establish datasets for the seven subjects. C4.5, the k-nearest neighbors algorithm (kNN), and the support vector machine (SVM) are applied to produce different classifiers, and their performances are compared. Across both the original and the resampled datasets, kNN achieved the highest classification accuracy among C4.5, kNN, and SVM. This research provides an empirical grounding for the generalization of Bagua; the best accuracy, obtained for the object subject, is close to 90%. This result can improve the practicability of Bagua divination.
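The evaluation pipeline described above (build a dataset of categorical feature attributes, train the three classifier families, compare cross-validated accuracy) can be sketched as follows. This is an illustrative sketch only, assuming scikit-learn: the thesis does not name its toolkit, scikit-learn's `DecisionTreeClassifier` implements CART and is used here only as a stand-in for C4.5, and the features, property values, and eight-class target are synthetic placeholders rather than the thesis data.

```python
# Hedged sketch of the classifier comparison; NOT the thesis implementation.
# Assumptions: scikit-learn is available; CART stands in for C4.5; the
# dataset below is random placeholder data, not the Bagua subject datasets.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n_samples = 200
# Hypothetical integer-coded categorical feature attributes (5 attributes,
# each with 3 possible property values).
X = rng.integers(0, 3, size=(n_samples, 5)).astype(float)
# Hypothetical 8-class target standing in for the eight trigrams.
y = rng.integers(0, 8, size=n_samples)

classifiers = {
    "decision tree (CART, stand-in for C4.5)": DecisionTreeClassifier(random_state=0),
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
}
for name, clf in classifiers.items():
    # 5-fold cross-validated accuracy, as is typical for such comparisons.
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```

On random placeholder labels the accuracies hover near chance (about 1/8 for eight classes); on the real subject datasets the relative ranking of the three classifiers is what the comparison would surface.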
﹝1﹞ Alpaydın, E. (2009). Introduction to Machine Learning: The MIT Press.
﹝2﹞ Berry, M. J., & Linoff, G. (1997). Data mining techniques: for marketing, sales, and customer support: John Wiley & Sons, Inc.
﹝3﹞ Breiman, L., Friedman, J., Stone, C. J., & Olshen, R. A. (1984). Classification and regression trees: CRC press.
﹝4﹞ Burges, C. J. (1998). A tutorial on support vector machines for pattern recognition. Data mining and knowledge discovery, 2(2), 121-167.
﹝5﹞ Cover, T., & Hart, P. (1967). Nearest neighbor pattern classification. IEEE Transactions on Information Theory, 13(1), 21-27.
﹝6﹞ Han, E.-H. S., & Karypis, G. (2000). Centroid-based document classification: Analysis and experimental results: Springer.
﹝7﹞ John, G. H., Kohavi, R., & Pfleger, K. (1994). Irrelevant features and the subset selection problem. Paper presented at the Machine Learning: Proceedings of the Eleventh International Conference.
﹝8﹞ Kass, G. V. (1980). An exploratory technique for investigating large quantities of categorical data. Applied statistics, 119-127.
﹝9﹞ Kohavi, R. (1995). A study of cross-validation and bootstrap for accuracy estimation and model selection. Paper presented at IJCAI.
﹝10﹞ Kreßel, U. H.-G. (1999). Pairwise classification and support vector machines. Paper presented at the Advances in kernel methods.
﹝11﹞ Lim, T.-S., Loh, W.-Y., & Shih, Y.-S. (2000). A comparison of prediction accuracy, complexity, and training time of thirty-three old and new classification algorithms. Machine learning, 40(3), 203-228.
﹝12﹞ Maglogiannis, I. G. (2007). Emerging artificial intelligence applications in computer engineering: real word AI systems with applications in eHealth, HCI, information retrieval and pervasive technologies (Vol. 160): IOS Press.
﹝13﹞ Mohri, M., Rostamizadeh, A., & Talwalkar, A. (2012). Foundations of machine learning: MIT press.
﹝14﹞ Murthy, S. K. (1998). Automatic construction of decision trees from data: A multi-disciplinary survey. Data mining and knowledge discovery, 2(4), 345-389.
﹝15﹞ Özçift, A. (2011). Random forests ensemble classifier trained with data resampling strategy to improve cardiac arrhythmia diagnosis. Computers in Biology and Medicine, 41(5), 265-271.
﹝16﹞ Qi, Y., Hauptmann, A., & Liu, T. (2003). Supervised classification for video shot segmentation. Paper presented at the 2003 International Conference on Multimedia and Expo (ICME '03).
﹝17﹞ Quinlan, J. R. (1986). Induction of decision trees. Machine learning, 1(1), 81-106.
﹝18﹞ Quinlan, J. R. (1993). C4.5: programs for machine learning: Morgan Kaufmann.
﹝19﹞ Shami, M., & Verhelst, W. (2007). An evaluation of the robustness of existing supervised machine learning approaches to the classification of emotions in speech. Speech Communication, 49(3), 201-212.
﹝20﹞ Tsoumakas, G., & Katakis, I. (2006). Multi-label classification: An overview. Dept. of Informatics, Aristotle University of Thessaloniki, Greece.
﹝21﹞ Witten, I. H., & Frank, E. (2005). Data Mining: Practical machine learning tools and techniques: Morgan Kaufmann.
﹝1﹞ 孔繁詩. (1997). 易經說卦傳記卦變研究 [A study of the Shuo Gua commentary and hexagram changes in the I Ching]. Taipei: 晴園印刷.
﹝2﹞ 行政院主計處. (2011). 中華民國行業標準分類 [Standard Industrial Classification of the Republic of China]: Directorate-General of Budget, Accounting and Statistics, Executive Yuan.
﹝3﹞ 余敦康. (2005). 易學今昔 [I Ching studies, past and present]. Mainland China: Guangxi Normal University Press.
﹝4﹞ 周春才. (2011). 漫畫易經 [The I Ching in comics]. Taiwan: 晶冠出版社.
﹝5﹞ 明福居士, & 信翰居士. (2011). 巧遇梅花易數 [An encounter with Plum Blossom Numerology]. Taiwan: 瑞成書局.
﹝6﹞ 傅佩榮. (2010). 易想天開看人生 [Looking at life through the I Ching]. Taiwan: 時報出版.