References
[1] L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone, “Classification and Regression Trees,” Wadsworth, California, USA, 1984.
[2] T. Calders and B. Goethals, “Mining All Non-Derivable Frequent Itemsets,” Proc. of 2002 European Conf. on Principles of Data Mining and Knowledge Discovery, pp. 74–85, 2002.
[3] G. Dong, X. Zhang, L. Wong, and J. Li, “CAEP: Classification by Aggregating Emerging Patterns,” DS’99 (LNCS 1721), Japan, Dec. 1999.
[4] J. Gehrke, V. Ganti, R. Ramakrishnan, and W-Y. Loh, “BOAT—Optimistic Decision Tree Construction,” Proceedings of the 1999 ACM SIGMOD international conference on Management of Data, pp. 169–180, 1999.
[5] J. Gehrke, R. Ramakrishnan, and V. Ganti, “RainForest—A Framework for Fast Decision Tree Construction of Large Datasets,” Data Mining and Knowledge Discovery, 4:2/3, pp. 127–162, 2000.
[6] D. Gunopulos, H. Mannila, R. Khardon, and H. Toivonen, “Data Mining, Hypergraph Transversals, and Machine Learning,” Proc. 1997 ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems, pp. 209–216, 1997.
[7] J. Han, J. Wang, Y. Lu, and P. Tzvetkov, “Mining Top-K Frequent Closed Patterns Without Minimum Support,” Proc. of 2002 Int. Conf. on Data Mining, pp. 211–218, 2002.
[8] B. Liu, W. Hsu, and Y. Ma, “Integrating Classification and Association Rule Mining,” Proceedings of the Fourth International Conference on Knowledge Discovery and Data Mining, pp. 80–86, 1998.
[9] B. Liu, W. Hsu, and Y. Ma, “Pruning and Summarizing the Discovered Associations,” Proc. of 1999 ACM SIGKDD Int. Conf. on Knowledge Discovery and Data Mining (KDD-99), 1999.
[10] B. Liu, M. Hu, and W. Hsu, “Multi-Level Organization and Summarization of the Discovered Rules,” Proc. ACM SIGKDD Int. Conf. on Knowledge Discovery and Data Mining, pp. 208–217, 2000.
[11] M. Mehta, R. Agrawal, and J. Rissanen, “SLIQ: A Fast Scalable Classifier for Data Mining,” Advances in Database Technology—Proceedings of the Fifth International Conference on Extending Database Technology, pp. 18–32, 1996.
[12] N. Pasquier, Y. Bastide, R. Taouil, and L. Lakhal, “Discovering Frequent Closed Itemsets for Association Rules,” Proc. of 7th Int. Conf. on Database Theory, pp. 398–416, 1999.
[13] J. Pei, G. Dong, W. Zou, and J. Han, “On Computing Condensed Frequent Pattern Bases,” Proc. 2002 Int. Conf. on Data Mining, pp. 378–385, 2002.
[14] J. R. Quinlan, “Induction of Decision Trees,” Machine Learning, 1, pp. 81–106, 1986.
[15] J. R. Quinlan, “C4.5: Programs for Machine Learning,” Morgan Kaufmann Series in Machine Learning, Morgan Kaufmann, San Mateo, CA, 1993.
[16] J. R. Quinlan and R. M. Cameron-Jones, “FOIL: A Midterm Report,” Proceedings of the 1993 European Conference on Machine Learning, pp. 3–20, 1993.
[17] R. Rastogi and K. Shim, “PUBLIC: A Decision Tree Classifier That Integrates Building and Pruning,” VLDB’98, Proceedings of 24th International Conference on Very Large Data Bases, pp. 404–415, 1998.
[18] J. C. Shafer, R. Agrawal, and M. Mehta, “SPRINT: A Scalable Parallel Classifier for Data Mining,” VLDB’96, Proceedings of 22nd International Conference on Very Large Data Bases, pp. 544–555, 1996.
[19] W. Li, J. Han, and J. Pei, “CMAR: Accurate and Efficient Classification Based on Multiple Class-Association Rules,” Proc. of 2001 IEEE Int. Conf. on Data Mining (ICDM’01), pp. 369–376, 2001.
[20] X. Yan, H. Cheng, J. Han, and D. Xin, “Summarizing Itemset Patterns: A Profile-Based Approach,” Proceedings of the 2005 ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Aug. 2005.
[21] C. Yang, U. Fayyad, and P. S. Bradley, “Efficient Discovery of Error-Tolerant Frequent Itemsets in High Dimensions,” Proc. of 2001 ACM SIGKDD Int. Conf. on Knowledge Discovery and Data Mining, pp. 194–203, 2001.
[22] X. Yin and J. Han, “CPAR: Classification Based on Predictive Association Rules,” Proceedings of the Third SIAM International Conference on Data Mining, pp. 208–217, 2003.