References
Maron, M. E. (1961). Automatic indexing: An experimental inquiry. Journal of the ACM (JACM), 8(3), 404-417.
Cover, T., & Hart, P. (1967). Nearest neighbor pattern classification. IEEE Transactions on Information Theory, 13(1), 21-27.
Joachims, T. (1998, April). Text categorization with support vector machines: Learning with many relevant features. In European Conference on Machine Learning (pp. 137-142). Springer, Berlin, Heidelberg.
Kim, Y. (2014, October). Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP) (pp. 1746-1751).
Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning representations by back-propagating errors. Nature, 323(6088), 533-536.
Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8), 1735-1780.
Qiu, X., Sun, T., Xu, Y., Shao, Y., Dai, N., & Huang, X. (2020). Pre-trained models for natural language processing: A survey. Science China Technological Sciences, 63(10), 1872-1897.
Chen, N., Su, X., Liu, T., Hao, Q., & Wei, M. (2020). A benchmark dataset and case study for Chinese medical question intent classification. BMC Medical Informatics and Decision Making, 20(3), 1-7.
Zhang, N., Chen, M., Bi, Z., Liang, X., Li, L., Shang, X., ... & Chen, Q. (2021). CBLUE: A Chinese biomedical language understanding evaluation benchmark. arXiv preprint arXiv:2106.08087.
Term frequency by inverse document frequency. (2009). In Encyclopedia of Database Systems (p. 3035). Springer.
Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
Pennington, J., Socher, R., & Manning, C. D. (2014, October). GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP) (pp. 1532-1543).
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... & Polosukhin, I. (2017). Attention is all you need. In Advances in Neural Information Processing Systems (pp. 5998-6008).
Huang, Z., Xu, W., & Yu, K. (2015). Bidirectional LSTM-CRF models for sequence tagging. arXiv preprint arXiv:1508.01991.
Elfaik, H., & Nfaoui, E. H. (2021). Deep bidirectional LSTM network learning-based sentiment analysis for Arabic text. Journal of Intelligent Systems, 30(1), 395-412.
Kipf, T. N., & Welling, M. (2016). Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907.
Li, Q., Peng, H., Li, J., Xia, C., Yang, R., Sun, L., ... & He, L. (2020). A survey on text classification: From shallow to deep learning. arXiv preprint arXiv:2008.00364.
Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Sun, Y., Wang, S., Li, Y., Feng, S., Chen, X., Zhang, H., ... & Wu, H. (2019). ERNIE: Enhanced representation through knowledge integration. arXiv preprint arXiv:1904.09223.
Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R. R., & Le, Q. V. (2019). XLNet: Generalized autoregressive pretraining for language understanding. Advances in Neural Information Processing Systems, 32.
Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., ... & Stoyanov, V. (2019). RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
Joshi, M., Chen, D., Liu, Y., Weld, D. S., Zettlemoyer, L., & Levy, O. (2020). SpanBERT: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8, 64-77.
Lan, Z., Chen, M., Goodman, S., Gimpel, K., Sharma, P., & Soricut, R. (2019). ALBERT: A lite BERT for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942.
Clark, K., Luong, M. T., Le, Q. V., & Manning, C. D. (2020). ELECTRA: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555.
Wang, A., Singh, A., Michael, J., Hill, F., Levy, O., & Bowman, S. R. (2018). GLUE: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461.
Xu, L., Hu, H., Zhang, X., Li, L., Cao, C., Li, Y., ... & Lan, Z. (2020). CLUE: A Chinese language understanding evaluation benchmark. arXiv preprint arXiv:2004.05986.
Lai, G., Xie, Q., Liu, H., Yang, Y., & Hovy, E. (2017). RACE: Large-scale reading comprehension dataset from examinations. arXiv preprint arXiv:1704.04683.
Rajpurkar, P., Jia, R., & Liang, P. (2018). Know what you don't know: Unanswerable questions for SQuAD. arXiv preprint arXiv:1806.03822.
Cui, Y., Che, W., Liu, T., Qin, B., Wang, S., & Hu, G. (2020). Revisiting pre-trained models for Chinese natural language processing. arXiv preprint arXiv:2004.13922.
Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C. H., & Kang, J. (2020). BioBERT: A pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4), 1234-1240.
He, Y., Zhu, Z., Zhang, Y., Chen, Q., & Caverlee, J. (2020). Infusing disease knowledge into BERT for health question answering, medical inference and disease name recognition. arXiv preprint arXiv:2010.03746.
Wang, X., Gao, T., Zhu, Z., Zhang, Z., Liu, Z., Li, J., & Tang, J. (2021). KEPLER: A unified model for knowledge embedding and pre-trained language representation. Transactions of the Association for Computational Linguistics, 9, 176-194.
Xiong, W., Du, J., Wang, W. Y., & Stoyanov, V. (2019). Pretrained encyclopedia: Weakly supervised knowledge-pretrained language model. arXiv preprint arXiv:1912.09637.
Liu, W., Zhou, P., Zhao, Z., Wang, Z., Ju, Q., Deng, H., & Wang, P. (2020, April). K-BERT: Enabling language representation with knowledge graph. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 34, No. 03, pp. 2901-2908).
Lipscomb, C. E. (2000). Medical subject headings (MeSH). Bulletin of the Medical Library Association, 88(3), 265.
Donnelly, K. (2006). SNOMED-CT: The advanced terminology and coding system for eHealth. Studies in Health Technology and Informatics, 121, 279.
Liu, H. QASystemOnMedicalKG [GitHub repository]. https://github.com/liuhuanyong/QASystemOnMedicalKG
Lee, L. H., & Lu, Y. (2021). Multiple embeddings enhanced multi-graph neural networks for Chinese healthcare named entity recognition. IEEE Journal of Biomedical and Health Informatics.
Joulin, A., Grave, E., Bojanowski, P., & Mikolov, T. (2016). Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759.
Dietterich, T. G. (1998). Approximate statistical tests for comparing supervised classification learning algorithms. Neural Computation, 10(7), 1895-1923.
Bowker, A. H. (1948). A test for symmetry in contingency tables. Journal of the American Statistical Association, 43(244), 572-574.