References
[1] Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and Dean, J. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems (Volume 2), pages 3111–3119. https://dl.acm.org/doi/10.5555/2999792.2999959.
[2] Pennington, J., Socher, R., and Manning, C. 2014. GloVe: Global vectors for word
representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural
Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Association for
Computational Linguistics. https://aclanthology.org/D14-1162.
[3] Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. 2019. BERT: Pre-training of deep
bidirectional transformers for language understanding. In Proceedings of the 2019
Conference of the North American Chapter of the Association for Computational
Linguistics: Human Language Technologies (Volume 1: Long and Short Papers), pages 4171–4186,
Minneapolis, Minnesota. Association for Computational Linguistics.
https://aclanthology.org/N19-1423.
[4] Reimers, N., and Gurevych, I. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics. https://aclanthology.org/D19-1410.
[5] Robertson, S. E., Walker, S., Jones, S., Hancock-Beaulieu, M., and Gatford, M. 1995. Okapi at TREC-3. In D. K. Harman (ed.), Proceedings of the Third Text REtrieval Conference (TREC-3), pages 109–126.
[6] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer,
L., and Stoyanov, V. 2019. RoBERTa: A robustly optimized BERT pretraining approach.
arXiv:1907.11692. https://doi.org/10.48550/arXiv.1907.11692.
[7] Wang, M., Smith, N., and Mitamura, T. 2007. What is the Jeopardy model? A quasi-synchronous grammar for QA. In Proceedings of the 2007 Joint Conference on Empirical
Methods in Natural Language Processing and Computational Natural Language
Learning (EMNLP-CoNLL), pages 22–32, Prague, Czech Republic. Association for
Computational Linguistics. https://aclanthology.org/D07-1003.
[8] Yang, Y., Yih, W., and Meek, C. 2015. WikiQA: A challenge dataset for open-domain question answering. In Proceedings of the 2015 Conference on Empirical Methods in
Natural Language Processing, pages 2013–2018, Lisbon, Portugal. Association for
Computational Linguistics. https://aclanthology.org/D15-1237.
[9] Bajaj, P., Campos, D., Craswell, N., Deng, L., Gao, J., Liu, X., Majumder, R.,
McNamara, A., Mitra, B., Nguyen, T., Rosenberg, M., Song, X., Stoica, A., Tiwary, S.,
and Wang, T. 2016. MS MARCO: A human generated machine reading comprehension dataset. arXiv:1611.09268. Version 3. https://doi.org/10.48550/arXiv.1611.09268.
[10] Abacha, A. B., Shivade, C., and Demner-Fushman, D. 2019. Overview of the MEDIQA
2019 shared task on textual inference, question entailment and question answering.
In Proceedings of the 18th BioNLP Workshop and Shared Task, pages 370–379,
Florence, Italy. Association for Computational Linguistics.
https://aclanthology.org/W19-5039.
[11] Demner-Fushman, D., Mrabet, Y., and Abacha, A. B. 2020. Consumer health
information and question answering: helping consumers find answers to their health-related information needs. Journal of the American Medical Informatics Association, 27(2):194–201. https://doi.org/10.1093/jamia/ocz152.
[12] Abacha, A. B., Agichtein, E., Pinter, Y., and Demner-Fushman, D. 2017. Overview of
the medical question answering task at TREC 2017 LiveQA. In Proceedings of the Text REtrieval Conference (TREC).
[13] Garg, S., Vu, T., and Moschitti, A. 2020. TANDA: Transfer and adapt pre-trained transformer models for answer sentence selection. In Proceedings of the AAAI Conference on Artificial Intelligence, 34(5):7780–7788.
https://doi.org/10.1609/aaai.v34i05.6282.
[14] Laskar, M. T. R., Huang, X., and Hoque, E. 2020. Contextualized embeddings based
transformer encoder for sentence similarity modeling in answer selection task.
In Proceedings of the 12th Language Resources and Evaluation Conference, pages
5505–5514, Marseille, France. European Language Resources Association.
https://aclanthology.org/2020.lrec-1.676.
[15] Ren, R., Qu, Y., Liu, J., Zhao, W. X., She, Q., Wu, H., Wang, H., and Wen, J. R. 2021.
RocketQAv2: A joint training method for dense passage retrieval and passage re-ranking.
In Proceedings of the 2021 Conference on Empirical Methods in Natural Language
Processing, pages 2825–2835, Online and Punta Cana, Dominican Republic.
Association for Computational Linguistics. https://aclanthology.org/2021.emnlp-main.224.
[16] Gao, L., Dai, Z., and Callan, J. 2021. COIL: Revisit exact lexical match in information
retrieval with contextualized inverted list. In Proceedings of the 2021 Conference of the
North American Chapter of the Association for Computational Linguistics: Human
Language Technologies, pages 3030–3042, Online. Association for Computational
Linguistics. https://aclanthology.org/2021.naacl-main.241.
[17] Gao, L., and Callan, J. 2022. Unsupervised corpus aware language model pre-training
for dense passage retrieval. In Proceedings of the 60th Annual Meeting of the
Association for Computational Linguistics (Volume 1: Long Papers), pages 2843–2853,
Dublin, Ireland. Association for Computational Linguistics.
https://aclanthology.org/2022.acl-long.203.
[18] Gao, T., Yao, X., and Chen, D. 2021. SimCSE: Simple contrastive learning of sentence
embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural
Language Processing, pages 6894–6910, Online and Punta Cana, Dominican Republic.
Association for Computational Linguistics. https://aclanthology.org/2021.emnlp-main.552.
[19] Zhu, W., Zhou, X., Wang, K., Luo, X., Li, X., Ni, Y., and Xie, G. 2019. PANLP at
MEDIQA 2019: Pre-trained language models, transfer learning and knowledge
distillation. In Proceedings of the 18th BioNLP Workshop and Shared Task, pages 380–
388, Florence, Italy. Association for Computational Linguistics.
https://aclanthology.org/W19-5040.
[20] Cheng, Y., Fu, S., Tang, M., and Liu, D. 2019. Multi-task deep neural network (MT-DNN) enabled optical performance monitoring from directly detected PDM-QAM signals. Optics Express, 27(13):19062–19074. https://doi.org/10.1364/OE.27.019062.
[21] Hearst, M. A., Dumais, S. T., Osuna, E., Platt, J., and Scholkopf, B. 1998. Support vector
machines. IEEE Intelligent Systems and their Applications, 13(4):18–28.
https://doi.org/10.1109/5254.708428.
[22] Demner-Fushman, D., Rogers, W., and Aronson, A. 2017. MetaMap Lite: an evaluation
of a new Java implementation of MetaMap. Journal of the American Medical Informatics Association, 24(4):841–844. https://doi.org/10.1093/jamia/ocw177.
[23] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł.,
and Polosukhin, I. 2017. Attention is all you need. In Proceedings of the 31st
International Conference on Neural Information Processing Systems, pages 6000–6010, Long Beach,
California, USA. https://dl.acm.org/doi/10.5555/3295222.3295349.
[24] Lan, Z., Chen, M., Goodman, S., Gimpel, K., Sharma, P., and Soricut, R. 2020. ALBERT: A lite BERT for self-supervised learning of language representations. arXiv:1909.11942. Version 6. https://doi.org/10.48550/arXiv.1909.11942.
[25] Cui, Y., Che, W., Liu, T., Qin, B., Wang, S., and Hu, G. 2020. Revisiting pre-trained
models for Chinese natural language processing. In Findings of the Association for
Computational Linguistics: EMNLP 2020, pages 657–668, Online. Association for
Computational Linguistics. https://doi.org/10.18653/v1/2020.findings-emnlp.58.
[26] Kwiatkowski, T., Palomaki, J., Redfield, O., Collins, M., Parikh, A., Alberti, C., Epstein,
D., Polosukhin, I., Devlin, J., Lee, K., Toutanova, K., Jones, L., Kelcey, M., Chang, M.-W., Dai, A. M., Uszkoreit, J., Le, Q., and Petrov, S. 2019. Natural questions: A benchmark
for question answering research. Transactions of the Association for Computational
Linguistics, 7:452–466. https://aclanthology.org/Q19-1026.
[27] Wang, A., Singh, A., Michael, J., Hill, F., Levy, O., and Bowman, S. R. 2018. GLUE: A
multi-task benchmark and analysis platform for natural language understanding.
In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and
Interpreting Neural Networks for NLP, pages 353–355, Brussels, Belgium. Association
for Computational Linguistics. https://aclanthology.org/W18-5446.