References
[1] R. Karchin, K. Karplus, and D. Haussler, “Classifying G-protein coupled receptors with
support vector machines,” Bioinformatics, vol. 18, no. 1, pp. 147-159, Jan. 2002.
[2] C. S. Leslie, E. Eskin, A. Cohen, J. Weston, and W. S. Noble, “Mismatch string kernels for
discriminative protein classification,” Bioinformatics, vol. 20, no. 4, pp. 467-476, Jan. 2004.
[3] K. Tsuda, H. Shin, and B. Scholkopf, “Fast protein classification with multiple networks,”
Bioinformatics, vol. 21, suppl. 2, pp. ii59-ii65, Sep. 2005.
[4] C. P. Lee, W. S. Lin, Y. M. Chen, and B. J. Kuo, “Gene selection and sample classification on
microarray data based on adaptive genetic algorithm/k-nearest neighbor method,” Expert
Systems with Applications, vol. 38, no. 5, pp. 4661-4667, May 2011.
[5] K. Moorthy, M. S. Bin Mohamad, and S. Deris, “Multiple gene sets for cancer classification
using gene range selection based on random forest,” Lecture Notes in Computer Science, vol.
7802, pp. 385-393, Mar. 2013.
[6] G. Shuster et al., “Classification of breast cancer precursors through exhaled breath,”
Breast Cancer Research and Treatment, vol. 126, no. 3, pp. 791-796, 2011.
[7] J. Zheng and B. L. Lu, “A support vector machine classifier with automatic confidence
and its application to gender classification,” Neurocomputing, vol. 74, pp. 1926-1935, May
2011.
[8] V. K. Anagnostou et al., “Molecular classification of nonsmall cell lung cancer using a
4-protein quantitative assay,” Cancer, vol. 118, no. 6, pp. 1607–1618, Mar. 2012.
[9] C. Huang, D. Yang, and Y. Chuang, “Application of wrapper approach and composite
classifier to the stock trend prediction,” Expert Systems with Applications, vol. 34, no. 4, pp.
2870-2878, May 2008.
[10] K. Assaleh, H. El-Baz, and S. Al-Salkhadi, “Predicting stock prices using polynomial
classifiers: the case of Dubai financial market,” Journal of Intelligent Learning Systems and
Applications, vol. 3, no. 2A, pp. 82-89, May 2011.
[11] Y. Son, D. J. Noh, and J. Lee, “Forecasting trends of high-frequency KOSPI200 index data
using learning classifiers,” Expert Systems with Applications, vol. 39, no. 14, pp. 11607–
11615, Oct. 2012.
[12] C. Cortes and V. Vapnik, “Support-vector networks,” Machine Learning, vol. 20, no.
3, pp. 273-297, Sep. 1995.
[13] V. N. Vapnik, The Nature of Statistical Learning Theory. New York: Springer-Verlag,
1995.
[14] J. R. Quinlan, “Induction of decision trees,” Machine Learning, vol. 1, no. 1, pp. 81-106,
1986.
[15] I. Rish, “An empirical study of the naive Bayes classifier,” IJCAI 2001 Workshop on
Empirical Methods in Artificial Intelligence, vol. 3, no. 22, pp. 41-46, 2001.
[16] T. Cover and P. Hart, “Nearest neighbor pattern classification,” IEEE Transactions on
Information Theory, vol. 13, no. 1, pp. 21-27, Jan. 1967.
[17] L. Jiang, Z. Cai, D. Wang, and S. Jiang, “Survey of improving k-nearest-neighbor for
classification,” Proceedings of the 4th International Conference on Fuzzy Systems and
Knowledge Discovery, vol. 1, pp. 679-683, Aug. 2007.
[18] L. Rokach, “Ensemble-based classifiers,” Artificial Intelligence Review, vol. 33, no. 1-2,
pp. 1-39, Feb. 2010.
[19] M.-L. Huang, H.-Y. Chen, and J.-J. Huang, “Glaucoma detection using adaptive neuro-fuzzy
inference system,” Expert Systems with Applications, vol. 32, no. 2, pp. 458-468, Feb. 2007.
[20] A. Das and M. Bhattacharya, “GA based neuro fuzzy techniques for breast cancer
identification,” Proceedings of the 2008 International Machine Vision and Image Processing
Conference, pp. 136-141, Sep. 2008.
[21] A. Das and M. Bhattacharya, “A study on prognosis of brain tumors using fuzzy logic and
genetic algorithm based techniques,” Proceedings of the 2009 International Joint Conference
on Bioinformatics, Systems Biology and Intelligent Computing, pp. 348-351, Aug. 2009.
[22] L. A. Zadeh, “Fuzzy sets,” Information and Control, vol. 8, no. 3, pp. 338-353, Jun. 1965.
[23] D. Ramot, R. Milo, M. Friedman, and A. Kandel, “Complex fuzzy sets,” IEEE Transactions
on Fuzzy Systems, vol. 10, no. 2, pp. 171-186, Apr. 2002.
[24] D. Ramot, M. Friedman, G. Langholz, and A. Kandel, “Complex fuzzy logic,” IEEE
Transactions on Fuzzy Systems, vol. 11, no. 4, pp. 450-461, Aug. 2003.
[25] C. Li and T. Chiang, “Function approximation with complex neuro-fuzzy system using
complex fuzzy sets – a new approach,” New Generation Computing, vol. 29, no. 3, pp.
261-276, Jul. 2011.
[26] C. Li and T. W. Chiang, “Complex neuro-fuzzy ARIMA forecasting — a new approach
using complex fuzzy sets,” IEEE Transactions on Fuzzy Systems, vol. 21, no. 3, pp. 567-584,
Jun. 2013.
[27] C. Li and T. W. Chiang, “Complex fuzzy computing to time series prediction—a
multi-swarm PSO learning approach,” Lecture Notes in Computer Science, vol. 6592, pp.
242-251, Apr. 2011.
[28] C. Li and T. W. Chiang, “Complex fuzzy model with PSO-RLSE hybrid learning approach
to function approximation,” International Journal of Intelligent Information and Database
Systems, vol. 5, no. 4, pp. 409-430, Jul. 2011.
[29] C. Li and F. Chan, “Complex-fuzzy adaptive image restoration – an
artificial-bee-colony-based learning approach,” Lecture Notes in Computer Science, vol.
6592, pp. 90-99, Apr. 2011.
[30] G. Ou and Y.L. Murphey, “Multi-class pattern classification using neural networks,” Pattern
Recognition, vol. 40, no. 1, pp. 4-18, Jan. 2007.
[31] T. C. Chen and H. L. Tsao, “Using a hybrid meta-evolutionary rule mining approach as
a classification response model,” Expert Systems with Applications, vol. 36, no. 2, pt. 1,
pp. 1999-2007, Mar. 2009.
[32] Y. W. Chen and C. J. Lin, “Combining SVMs with various feature selection strategies,”
Studies in Fuzziness and Soft Computing, vol. 207, pp. 315-324, 2006.
[33] M. Clerc, A method to improve Standard PSO, 2009. [Online]. Available:
http://clerc.maurice.free.fr/pso/Design_efficient_PSO.pdf [Accessed: Jul. 28, 2012].
[34] M. Clerc, Standard Particle Swarm Optimisation, Sep. 2012. [Online]. Available:
http://clerc.maurice.free.fr/pso/SPSO_descriptions.pdf [Accessed: Mar. 26, 2013].
[35] J. S. R. Jang, C. T. Sun, and E. Mizutani, “Least-squares methods for system identification,”
Neuro-Fuzzy and Soft Computing, NJ: Prentice Hall, 1997, pp. 95-125.
[36] S. Dick, “Toward complex fuzzy logic,” IEEE Transactions on Fuzzy Systems, vol. 13, no. 3,
pp. 405-414, Jun. 2005.
[37] E. Mamdani, and S. Assilian, “An experiment in linguistic synthesis with a fuzzy logic
controller,” International Journal of Man-Machine Studies, vol. 7, no. 1, pp. 1-13, Jan. 1975.
[38] T. Takagi and M. Sugeno, “Fuzzy identification of systems and its applications to modeling
and control,” IEEE Transactions on Systems, Man and Cybernetics, vol. 15, no. 1, pp.
116-132, Jan. 1985.
[39] J. S. R. Jang, “ANFIS: Adaptive-network-based fuzzy inference system,” IEEE Transactions
on Systems, Man and Cybernetics, vol. 23, no. 3, pp. 665-685, May 1993.
[40] D. Nauck and R. Kruse, “A neuro-fuzzy method to learn fuzzy classification rules from data,”
Fuzzy Sets and Systems, vol. 89, no. 3, pp. 277-288, Aug. 1997.
[41] D. Nauck, A. Nurnberger, and R. Kruse, “Neuro-fuzzy classification,” Proceedings of the 6th
Conference of the International Federation of Classification Societies, pp. 287-294, Jul.
1998.
[42] D. Nauck and R. Kruse, “Obtaining interpretable fuzzy classification rules from medical
data,” Artificial Intelligence in Medicine, pp. 146-169, Jun. 1999.
[43] G. H. John, R. Kohavi, and K. Pfleger, “Irrelevant features and the subset selection
problem,” Proceedings of the 11th International Conference on Machine Learning, pp.
121-129, 1994.
[44] J. Kennedy and R. Eberhart, “Particle swarm optimization,” Proceedings of the IEEE
International Conference on Neural Networks, vol. 4, pp. 1942-1948, Nov. 1995.
[45] F. van den Bergh and A. P. Engelbrecht, “A cooperative approach to particle swarm
optimization,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 225-239,
Jun. 2004.
[46] B. Liu, L. Wang, Y. H. Jin, F. Tang, and D. X. Huang, “Improved particle swarm
optimization combined with chaos,” Chaos, Solitons & Fractals, vol. 25, no. 5, pp.
1261-1271, Sep. 2005.
[47] D. Bratton and J. Kennedy, “Defining a standard for particle swarm optimization,” IEEE
Swarm Intelligence Symposium, pp. 120-127, Apr. 2007.
[48] M. Clerc, Back to random topology, Feb. 2007. [Online]. Available:
http://clerc.maurice.free.fr/pso/random_topology.pdf [Accessed: Mar. 26, 2013].
[49] W. H. Wolberg, Breast Cancer Wisconsin (Original) Data Set, Jul. 1992.
[Online]. Available:
http://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+(Original) [Accessed:
May 14, 2013].
[50] J. Schlimmer, Congressional Voting Records Data Set, Apr. 1987. [Online]. Available:
http://archive.ics.uci.edu/ml/datasets/Congressional+Voting+Records [Accessed: May 14,
2013].
[51] D. Gil and J. L. Girela, Fertility Data Set, Jan. 2013. [Online]. Available:
http://archive.ics.uci.edu/ml/datasets/Fertility [Accessed: May 14, 2013].
[52] S. Salzberg, Echocardiogram Data Set, Feb. 1989. [Online]. Available:
http://archive.ics.uci.edu/ml/datasets/Echocardiogram [Accessed: May 14, 2013].
[53] M. Little, Parkinsons Data Set, Jun. 2008. [Online]. Available:
http://archive.ics.uci.edu/ml/datasets/Parkinsons [Accessed: May 14, 2013].
[54] R. A. Fisher, Iris Data Set, Jul. 1988. [Online]. Available:
http://archive.ics.uci.edu/ml/datasets/Iris [Accessed: May 14, 2013].
[55] Z. Q. Hong and J. Y. Yang, Lung Cancer Data Set, May 1992. [Online]. Available:
http://archive.ics.uci.edu/ml/datasets/Lung+Cancer [Accessed: May 14, 2013].
[56] M. Forina et al., Wine Data Set, Jul. 1991. [Online]. Available:
http://archive.ics.uci.edu/ml/datasets/Wine [Accessed: May 14, 2013].
[57] B. Hayes-Roth and F. Hayes-Roth, Hayes-Roth Data Set, Mar. 1989. [Online]. Available:
http://archive.ics.uci.edu/ml/datasets/Hayes-Roth [Accessed: May 14, 2013].
[58] J. P. Marques de Sá, Breast Tissue Data Set, May 2010. [Online]. Available:
http://archive.ics.uci.edu/ml/datasets/Breast+Tissue [Accessed: May 14, 2013].
[59] R. Forsyth, Zoo Data Set, May 1990. [Online]. Available:
http://archive.ics.uci.edu/ml/datasets/Zoo [Accessed: May 14, 2013].
[60] J. Weston and C. Watkins, “Multi-class support vector machines,” Royal Holloway,
University of London, Department of Computer Science, Tech. Rep. CSD-TR-98-04,
1998.
[61] M. Ashraf, K. Le, and X. Juang, “Iterative weighted k-NN for constructing missing
feature values in Wisconsin breast cancer dataset,” 2011 3rd International Conference
on Data Mining and Intelligent Information Technology Applications, pp. 23-27, Oct.
2011.
[62] G. I. Salama, M. B. Abdelhalim, and M. A. Zeid, “Breast cancer diagnosis on three
different datasets using multi-classifiers,” International Journal of Computer and
Information Technology, vol. 1, no. 1, pp. 36-43, Sep. 2012.
[63] A. Marcano-Cedeño, J. Quintanilla-Domínguez, and D. Andina, “WBCD breast cancer
database classification applying artificial metaplasticity neural network,” Expert
Systems with Applications, vol. 38, no. 8, pp. 9573-9579, Aug. 2011.
[64] M. Karabatak and M. C. Ince, “An expert system for detection of breast cancer
based on association rules and neural network,” Expert Systems with Applications, vol.
36, no. 2, pp. 3465-3469, Mar. 2009.
[65] T. Kiyan and T. Yildirim, “Breast cancer diagnosis using statistical neural networks,”
IU-Journal of Electrical & Electronics Engineering, vol. 4, no. 2, pp. 1149-1153, Jun.
2004.
[66] K. M. Salama and A. A. Freitas, “ABC-Miner: An ant-based bayesian classification
algorithm,” Lecture Notes in Computer Science, vol. 7461, pp. 13-24, Sep. 2012.
[67] M. Grochowski and W. Duch, “Fast projection pursuit based on quality of projected
clusters,” Lecture Notes in Computer Science, vol. 6594, pp. 89-97, Apr. 2011.
[68] L. Li, “Perceptron learning with random coordinate descent,” Computer Science
Technical Report CaltechCSTR:2005.006, California Institute of Technology, Aug.
2005.
[69] D. Gil, J. L. Girela, J. D. Juan, M. J. Gomez-Torres, and M. Johnsson, “Predicting seminal
quality with artificial intelligence methods,” Expert Systems with Applications, vol. 39, no.
16, pp. 12564-12573, Nov. 2012.
[70] M. Sebban, R. Nock, and S. Lallich, “Stopping criterion for boosting-based data reduction
techniques: from binary to multiclass problems,” Journal of Machine Learning Research,
vol. 3, pp. 863-885, 2002.
[71] G. Melli, “A lazy model-based approach to on-line classification,” Simon Fraser University,
Apr. 1998.
[72] Z. H. Zhou and X. Y. Liu, “Training cost-sensitive neural networks with methods addressing
the class imbalance problem,” IEEE Transactions on Knowledge and Data Engineering, vol.
18, no. 1, Jan. 2006.
[73] F. Divina and E. Marchiori, “Handling continuous attributes in an evolutionary inductive
learner,” IEEE Transactions on Evolutionary Computation, vol. 9, no. 1, Feb. 2005.
[74] M. A. Little, P. E. McSharry, S. J. Roberts, D. A. E. Costello, and I. M. Moroz, “Exploiting
nonlinear recurrence and fractal scaling properties for voice disorder detection,”
BioMedical Engineering OnLine, vol. 6, no. 23, Jun. 2007.
[75] P. D. Acton and A. Newberg, “Artificial neural network classifier for the diagnosis of
Parkinson’s disease using [99mTc]TRODAT-1 and SPECT,” Physics in Medicine and Biology,
vol. 51, no. 12, Jun. 2006.
[76] M. Fallahnezhad, M. H. Moradi, and S. Zaferanlouei, “A hybrid higher order neural
classifier for handling classification problems,” Expert Systems with Applications, vol. 38, no.
1, pp. 386-393, Jan. 2011.
[77] S. J. Wang, A. Mathew, Y. Chen, L. F. Xi, L. Ma, and J. Lee, “Empirical analysis of support
vector machine ensemble classifiers,” Expert Systems with Applications, vol. 36, no. 3, pp.
6466-6476, Apr. 2009.
[78] R. Maclin and D. Opitz, “Popular ensemble methods: an empirical study,” Journal of
Artificial Intelligence Research, vol. 11, pp. 169-198, Aug. 1999.
[79] J. J. Rodriguez and L. I. Kuncheva, “Rotation forest: a new classifier ensemble method,”
IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 10, pp.
1619-1630, Oct. 2006.
[80] Z. Q. Hong and J. Y. Yang, “Optimal discriminant plane for a small number of samples
and design method of classifier on the plane,” Pattern Recognition, vol. 24, no. 4, pp.
317-324, 1991.
[81] L. Yu and H. Liu, “Feature selection for high-dimensional data: a fast correlation-based
filter solution,” Proceedings of the 20th International Conference on Machine Learning,
pp. 856-863, 2003.
[82] H. Mohamadi, J. Habibi, M. S. Abadeh, and H. Saadi, “Data mining with a simulated
annealing based fuzzy classification system,” Pattern Recognition, vol. 41, no.5, pp.
1824-1833, May 2008.
[83] G.H. John and P. Langley, “Estimating continuous distributions in Bayesian classifiers,”
Proceedings of the 11th Conference on Uncertainty in Artificial Intelligence, pp. 338–
345, Aug. 1995.
[84] J. R. Quinlan, C4.5: Programs for Machine Learning, Morgan Kaufmann Publishers Inc.,
1993.
[85] J. Bacardit and J. M. Garrell, “Bloat control and generalization pressure using the
minimum description length principle for a Pittsburgh approach learning classifier
system,” Proceedings of the 2003-2005 International Conference on Learning
Classifier Systems, pp. 59-79, 2007.
[86] Y. Jiang and Z. H. Zhou, “Editing training data for kNN classifiers with neural
network ensemble,” Lecture Notes in Computer Science, vol. 3173, pp. 356-361, Aug.
2004.
[87] J. Estrela da Silva, J. P. Marques de Sá, and J. Jossinet, “Classification of breast
tissue by electrical impedance spectroscopy,” Medical and Biological Engineering and
Computing, vol. 38, no. 1, pp. 26-30, Jan. 2000.
[88] Y. |