References
Berger, M. J. (2015). Large scale multi-label text classification with semantic word vectors. Technical report, Stanford University.
Boutell, M. R., Luo, J., Shen, X., & Brown, C. M. (2004). Learning multi-label scene classification. Pattern Recognition, 37(9), 1757-1771.
Bruna, J., Zaremba, W., Szlam, A., & LeCun, Y. (2013). Spectral networks and locally connected networks on graphs. arXiv preprint, arXiv:1312.6203.
Chen, D., O’Bray, L., & Borgwardt, K. (2022). Structure-aware transformer for graph representation learning. Proceedings of the 39th International Conference on Machine Learning, 3469-3489.
Clare, A., & King, R. D. (2001). Knowledge discovery in multi-label phenotype data. European Conference on Principles of Data Mining and Knowledge Discovery, 42-53.
Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1), 37-46.
Cortes, C., & Vapnik, V. (1995). Support-vector networks. Machine Learning, 20(3), 273-297.
Defferrard, M., Bresson, X., & Vandergheynst, P. (2016). Convolutional neural networks on graphs with fast localized spectral filtering. Proceedings of the 30th International Conference on Neural Information Processing Systems, 3844-3852.
Deng, Y. C., Tsai, C. Y., Wang, Y. R., Chen, S. H., & Lee, L. H. (2022). Predicting Chinese phrase-level sentiment intensity in valence-arousal dimensions with linguistic dependency features. IEEE Access, 10, 126612-126620.
Deng, Y. C., Wang, Y. R., Chen, S. H., & Lee, L. H. (2023). Toward transformer fusions for Chinese sentiment intensity prediction in valence-arousal dimensions. IEEE Access, 11, 109974-109982.
Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 4171-4186.
Elisseeff, A., & Weston, J. (2001). A kernel method for multi-labelled classification. Proceedings of the 14th International Conference on Neural Information Processing Systems: Natural and Synthetic, 681-687.
Fleiss, J. L. (1971). Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5), 378-382.
Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8), 1735-1780.
Holmes, T. H., & Rahe, R. H. (1967). The social readjustment rating scale. Journal of Psychosomatic Research, 11(2), 213-218.
Joulin, A., Grave, E., Bojanowski, P., & Mikolov, T. (2017). Bag of tricks for efficient text classification. In M. Lapata, P. Blunsom, & A. Koller (Eds.), Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, 427-431.
Kim, Y. (2014). Convolutional neural networks for sentence classification. In A. Moschitti, B. Pang, & W. Daelemans (Eds.), Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 1746-1751.
Kipf, T. N., & Welling, M. (2017). Semi-supervised classification with graph convolutional networks. arXiv preprint, arXiv:1609.02907.
Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2017). ImageNet classification with deep convolutional neural networks. Communications of the ACM, 60(6), 84-90.
Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33(1), 159-174.
Li, Q., Peng, H., Li, J., Xia, C., Yang, R., Sun, L., Yu, P. S., & He, L. (2022). A survey on text classification: From traditional to deep learning. ACM Transactions on Intelligent Systems and Technology, 13(2), Article 31.
Li, Y., Tarlow, D., Brockschmidt, M., & Zemel, R. (2015). Gated graph sequence neural networks. arXiv preprint, arXiv:1511.05493.
Liu, P., Qiu, X., & Huang, X. (2016). Recurrent neural network for text classification with multi-task learning. Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, 2873-2879.
Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient estimation of word representations in vector space. arXiv preprint, arXiv:1301.3781.
Nguyen, D. Q., Nguyen, T. D., & Phung, D. (2022). Universal graph transformer self-attention networks. Companion Proceedings of the Web Conference 2022, 193-196.
Pennington, J., Socher, R., & Manning, C. D. (2014). GloVe: Global vectors for word representation. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 1532-1543.
Quinlan, J. R. (2014). C4.5: Programs for machine learning. The Morgan Kaufmann Series in Machine Learning.
Rampášek, L., Galkin, M., Dwivedi, V. P., Luu, A. T., Wolf, G., & Beaini, D. (2022). Recipe for a general, powerful, scalable graph transformer. Advances in Neural Information Processing Systems 35 (NeurIPS 2022), 14501-14515.
Read, J., Pfahringer, B., Holmes, G., & Frank, E. (2011). Classifier chains for multi-label classification. Machine Learning, 85(3), 333-359.
Scarselli, F., Gori, M., Tsoi, A. C., Hagenbuchner, M., & Monfardini, G. (2009). The graph neural network model. IEEE Transactions on Neural Networks, 20(1), 61-80.
Shirzad, H., Velingker, A., Venkatachalam, B., Sutherland, D. J., & Sinop, A. K. (2023). Exphormer: Sparse transformers for graphs. Proceedings of the 40th International Conference on Machine Learning, 31613-31632.
Tsoumakas, G., Katakis, I., & Vlahavas, I. (2010). Mining multi-label data. In O. Maimon & L. Rokach (Eds.), Data Mining and Knowledge Discovery Handbook, 667-685.
Van Nguyen, M., Lai, V. D., Veyseh, A. P. B., & Nguyen, T. H. (2021). Trankit: A light-weight transformer-based toolkit for multilingual natural language processing. arXiv preprint, arXiv:2101.03289.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems 30 (NIPS 2017).
Veličković, P., Cucurull, G., Casanova, A., Romero, A., Liò, P., & Bengio, Y. (2018). Graph attention networks. arXiv preprint, arXiv:1710.10903.
Yao, L., Mao, C., & Luo, Y. (2019). Graph convolutional networks for text classification. Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence and Thirty-First Innovative Applications of Artificial Intelligence Conference and Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, 7370-7377.
Zhang, M.-L., & Zhou, Z.-H. (2007). ML-KNN: A lazy learning approach to multi-label learning. Pattern Recognition, 40(7), 2038-2048.