Master's/Doctoral Thesis 110521079: Detailed Record




Name: 鄭元皓 (Yuan-Hao Cheng)    Graduate Department: Department of Electrical Engineering
Thesis Title: 學習使用者意圖於中文醫療問題生成式摘要
(Learning User Intents for Abstractive Summarization of Chinese Medical Questions)
Related Theses
★ 多重嵌入增強式門控圖序列神經網路之中文健康照護命名實體辨識
★ 基於腦電圖小波分析之中風病人癲癇偵測研究
★ 基於條件式生成對抗網路之資料擴增於思覺失調症自動判別
★ 標籤圖卷積增強式超圖注意力網路之中文健康照護文本多重分類
★ 運用合成器混合注意力改善BERT模型於科學語言編輯
★ 強化領域知識語言模型於中文醫療問題意圖分類
★ 管道式語言轉譯器之中文健康照護開放資訊擷取
★ 運用句嵌入向量重排序器增進中文醫療問答系統效能
★ 利用雙重註釋編碼器於中文健康照護實體連結
★ 聯合詞性與局部語境於中文健康照護實體關係擷取
★ 運用異質圖注意力網路於中文醫療答案擷取式摘要
★ 標籤強化超圖注意力網路模型於精神疾病文本多標籤分類
★ 上下文嵌入增強異質圖注意力網路模型於心理諮詢文本多標籤分類
★ 基於階層式聚類注意力之編碼解碼器於醫療問題多答案摘要
★ 探索門控圖神經網路於心理諮詢文字情感強度預測
Files: Full text available for browsing in the system after 2028-10-12.
Abstract (Chinese) Abstractive summarization aims to condense a long text into a concise summary that preserves its meaning and main information. It has many applications, such as generating news headlines, academic paper abstracts, automated reports, and question-answering chatbots. The main goal of this research is question understanding for retrieval-based medical question-answering systems: users' medical questions often contain excessive unnecessary information, which lowers the precision of question-answer matching in the retrieval system. We therefore develop an abstractive summarization technique as a question-understanding solution that generates a summarized question for each user's medical question; the summary is then fed into the retrieval-based medical question-answering system to improve the matching of retrieved answers. We propose an Intent-based Medical Question Summarization (IMQS) model, in which an entity recognizer first extracts the medical entities in the original question; these entities are then added to the original question as an entity prompt to form the input of the summarization model. The model jointly learns question intent classification and summarization while fine-tuning the encoder and decoder of the summarization language model, so as to generate summaries that attend more closely to the entities and preserve the intent of the original question.
We crawled users' questions from the MedNet physician consultation platform and selected suitable questions for medical entity tagging, question intent labeling, and question summary annotation, resulting in a medical question summarization dataset, Med-QueSumm, with 2,468 Chinese medical questions. The original questions average about 110 characters and 7.75 entities, and each is labeled with one of six predefined intent categories (symptom, drug, department, treatment, examination, information); the summarized questions average about 45 characters, roughly 40% of the original length. Experimental results and analysis of the IMQS model show that it achieves the best summarization performance, with ROUGE-1 69.59%, ROUGE-2 51.32%, ROUGE-L 61.69%, and BERTScore 64.08%, outperforming related models (BERTSum-abs, PEGASUS, ProphetNet, CPT, BART, GSum, SpanCopy), and it also reaches a micro-F1 of 85.54% in intent classification. Overall, the IMQS model is a Chinese medical question summarization method that delivers both summarization quality and intent analysis.
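To make the pipeline described in the abstract concrete, the following is a minimal, hypothetical sketch (not the thesis implementation) of how recognized medical entities can be prepended to the original question as an entity prompt, and how a sequence-to-sequence language model can be fine-tuned jointly for summarization and intent classification. The checkpoint name fnlp/bart-base-chinese, the prompt format, the masked mean pooling, and the loss weight alpha are illustrative assumptions.

import torch.nn as nn
from transformers import AutoModelForSeq2SeqLM

INTENTS = ["病症", "藥物", "科室", "治療", "檢查", "資訊"]  # the six intent categories of Med-QueSumm

def build_input(question: str, entities: list[str]) -> str:
    # Entity prompt: prepend the recognized medical entities to the original question.
    # The exact prompt format here is an assumption for illustration only.
    return "實體:" + "、".join(entities) + "[SEP]" + question

class IntentAwareSummarizer(nn.Module):
    # A seq2seq language model with an extra intent-classification head on the encoder
    # output, trained with a weighted sum of the summarization and intent losses.
    def __init__(self, model_name: str = "fnlp/bart-base-chinese"):
        super().__init__()
        self.seq2seq = AutoModelForSeq2SeqLM.from_pretrained(model_name)
        self.intent_head = nn.Linear(self.seq2seq.config.d_model, len(INTENTS))

    def forward(self, input_ids, attention_mask, labels, intent_labels, alpha=0.5):
        # Summarization loss comes from the seq2seq cross-entropy over the reference summary.
        out = self.seq2seq(input_ids=input_ids, attention_mask=attention_mask, labels=labels)
        enc = out.encoder_last_hidden_state                 # (batch, seq_len, hidden)
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (enc * mask).sum(dim=1) / mask.sum(dim=1)  # masked mean pooling of encoder states
        intent_logits = self.intent_head(pooled)
        intent_loss = nn.functional.cross_entropy(intent_logits, intent_labels)
        return out.loss + alpha * intent_loss, intent_logits

At inference time, the seq2seq component would generate the summarized question (e.g., via self.seq2seq.generate) while the intent head predicts one of the six categories; both outputs can then be passed to the downstream retrieval-based QA system.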
Abstract (English) The goal of the abstractive summarization task is to condense a long text into a shorter summary while retaining the main information and key content. The main objective of this research is to understand users' medical questions through summarization techniques. In retrieval-based question-answering systems, users' medical questions may contain unnecessary information that hinders retrieval performance. Therefore, we focus on developing a generative summarization model called IMQS (Intent-based Medical Question Summarization) to create corresponding question summaries. First, we use an entity recognizer to extract the medical entities of an original question and design an entity prompt to formulate the input question to our summarization model. Then, question intents and summaries are jointly learned to fine-tune the encoder and decoder of the language model. Finally, the generated summary pays more attention to medical entities and retains the intent of the original question.
We collected users' questions from a physician consultation platform, MedNet, and selected suitable ones for entity tagging, intent labeling, and question summarization, resulting in a dataset called Med-QueSumm. It contains 2,468 Chinese medical questions, each with an average of about 110 characters and 7.75 entities, while the summarized questions average around 45 characters, nearly 40% of the original length. In addition, each question is annotated with one of six intent categories: symptoms, drugs, departments, treatments, examinations, and information. Experimental results and model analysis show that our IMQS model achieves the best ROUGE-1/-2/-L of 69.59/51.32/61.69 and a BERTScore of 64.08 in the summarization task, outperforming related models including BERTSum-abs, PEGASUS, ProphetNet, CPT, BART, GSum, and SpanCopy. Our IMQS model also obtains the best micro-F1 score of 85.54 in intent classification. Overall, IMQS is an effective summarization method for Chinese medical questions.
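As a reference for how the summarization scores cited above can be computed, here is a small, self-contained sketch of character-level ROUGE-1 and ROUGE-L F1 for Chinese text; the thesis may use a different tokenization or an existing ROUGE package, and BERTScore is typically computed with the open-source bert_score toolkit, so treat this only as an illustration.

from collections import Counter

def rouge_1_f(candidate: str, reference: str) -> float:
    # Unigram (character-level) overlap F1 between candidate and reference.
    overlap = sum((Counter(candidate) & Counter(reference)).values())
    if overlap == 0:
        return 0.0
    p, r = overlap / len(candidate), overlap / len(reference)
    return 2 * p * r / (p + r)

def rouge_l_f(candidate: str, reference: str) -> float:
    # Longest-common-subsequence F1 (ROUGE-L) over characters.
    m, n = len(candidate), len(reference)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = dp[i][j] + 1 if candidate[i] == reference[j] else max(dp[i][j + 1], dp[i + 1][j])
    lcs = dp[m][n]
    if lcs == 0:
        return 0.0
    p, r = lcs / m, lcs / n
    return 2 * p * r / (p + r)

# Illustrative example with made-up strings (not from the dataset):
print(rouge_1_f("頭痛又失眠該看哪一科", "常常頭痛失眠應該掛哪一科"))
print(rouge_l_f("頭痛又失眠該看哪一科", "常常頭痛失眠應該掛哪一科"))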
Keywords (Chinese) ★ 生成式摘要
★ 序列到序列
★ 預訓練語言模型
Keywords (English) ★ abstractive summarization
★ sequence to sequence
★ pre-trained language model
Table of Contents
Abstract (Chinese) i
Abstract (English) ii
Acknowledgements iii
Table of Contents iv
List of Figures vi
List of Tables vii
Chapter 1 Introduction 1
1-1 Research Background 1
1-2 Research Motivation 2
1-3 Research Objectives 3
1-4 Chapter Overview 4
Chapter 2 Related Work 5
2-1 Summarization Models 5
2-2 Summarization Datasets 15
Chapter 3 Model Architecture 17
3-1 Model Architecture 17
3-2 Entity Recognizer 20
3-3 Encoder 22
3-4 Decoder 24
Chapter 4 Experiments and Performance Evaluation 28
4-1 Dataset Construction 28
4-2 Evaluation Metrics 32
4-3 Experimental Settings 38
4-4 Intent Classification Comparison 40
4-5 Question Summarization Comparison 46
4-6 IMQS Model Analysis 53
4-6-1 Hyperparameter Analysis 53
4-6-2 Entity Recognition Performance Analysis 54
4-6-3 Language Model Comparison 56
4-6-4 Ablation Study 58
4-7 Human Evaluation 60
Chapter 5 Conclusions 67
5-1 Conclusions 67
5-2 Research Limitations 68
5-3 Future Work 69
References 70
References [1] A. M. Rush, S. Chopra, and J. Weston, "A Neural Attention Model for Abstractive Sentence Summarization," in Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal, September 2015, pp. 379-389, doi: 10.18653/v1/D15-1044. [Online]. Available: https://aclanthology.org/D15-1044
[2] S. Chopra, M. Auli, and A. M. Rush, "Abstractive Sentence Summarization with Attentive Recurrent Neural Networks," in Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego, California, June 2016, pp. 93-98, doi: 10.18653/v1/N16-1012. [Online]. Available: https://aclanthology.org/N16-1012
[3] A. See, P. J. Liu, and C. D. Manning, "Get To The Point: Summarization with Pointer-Generator Networks," in Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Vancouver, Canada, July 2017, pp. 1073-1083, doi: 10.18653/v1/P17-1099. [Online]. Available: https://aclanthology.org/P17-1099
[4] A. Vaswani et al., "Attention is all you need," Advances in neural information processing systems, vol. 30, 2017.
[5] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "Bert: Pre-training of deep bidirectional transformers for language understanding," arXiv preprint arXiv:1810.04805, 2018.
[6] Y. Liu and M. Lapata, "Text Summarization with Pretrained Encoders," in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019, pp. 3730-3740.
[7] M. Lewis et al., "Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension," arXiv preprint arXiv:1910.13461, 2019.
[8] J. Zhang, Y. Zhao, M. Saleh, and P. J. Liu, "PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization," arXiv preprint arXiv:1912.08777, 2019.
[9] W. Qi et al., "ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training," in Findings of the Association for Computational Linguistics: EMNLP 2020, 2020, pp. 2401-2410.
[10] W. Qi et al., "ProphetNet-X: Large-Scale Pre-training Models for English, Chinese, Multi-lingual, Dialog, and Code Generation," in Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations, Online, August 2021, pp. 232-239, doi: 10.18653/v1/2021.acl-demo.28. [Online]. Available: https://aclanthology.org/2021.acl-demo.28
[11] Y. Shao et al., "Cpt: A pre-trained unbalanced transformer for both chinese language understanding and generation," arXiv preprint arXiv:2109.05729, 2021.
[12] Z.-Y. Dou, P. Liu, H. Hayashi, Z. Jiang, and G. Neubig, "GSum: A General Framework for Guided Neural Abstractive Summarization," in Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Online, June 2021, pp. 4830-4842, doi: 10.18653/v1/2021.naacl-main.384. [Online]. Available: https://aclanthology.org/2021.naacl-main.384
[13] Y. Zhang, X. Zhang, X. Wang, S.-q. Chen, and F. Wei, "Latent Prompt Tuning for Text Summarization," arXiv preprint arXiv:2211.01837, 2022.
[14] W. Xiao and G. Carenini, "Entity-based spancopy for abstractive summarization to improve the factual consistency," arXiv preprint arXiv:2209.03479, 2022.
[15] A. Ben Abacha and D. Demner-Fushman, "On the Summarization of Consumer Health Questions," in Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, July 2019, pp. 2228-2234, doi: 10.18653/v1/P19-1215. [Online]. Available: https://aclanthology.org/P19-1215
[16] G. Zeng et al., "MedDialog: Large-scale Medical Dialogue Datasets," in Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online, November 2020, pp. 9241-9250, doi: 10.18653/v1/2020.emnlp-main.743. [Online]. Available: https://aclanthology.org/2020.emnlp-main.743
[17] C. Xu, J. Pei, H. Wu, Y. Liu, and C. Li, "MATINF: A Jointly Labeled Large-Scale Dataset for Classification, Question Answering and Summarization," in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online, July 2020, pp. 3586-3596, doi: 10.18653/v1/2020.acl-main.330. [Online]. Available: https://aclanthology.org/2020.acl-main.330
[18] S. Yadav, D. Gupta, and D. Demner-Fushman, "Chq-summ: A dataset for consumer healthcare question summarization," arXiv preprint arXiv:2206.06581, 2022.
[19] N. Salek Faramarzi, M. Patel, S. H. Bandarupally, and R. Banerjee, "Context-aware Medication Event Extraction from Unstructured Text," in Proceedings of the 5th Clinical Natural Language Processing Workshop, Toronto, Canada, July 2023, pp. 86-95, doi: 10.18653/v1/2023.clinicalnlp-1.11. [Online]. Available: https://aclanthology.org/2023.clinicalnlp-1.11
[20] N. Chen, X. Su, T. Liu, Q. Hao, and M. Wei, "A benchmark dataset and case study for Chinese medical question intent classification," BMC Medical Informatics and Decision Making, vol. 20, no. 3, pp. 1-7, 2020.
[21] C.-Y. Lin, "ROUGE: A Package for Automatic Evaluation of Summaries," in Text Summarization Branches Out, Barcelona, Spain, July 2004, pp. 74-81. [Online]. Available: https://aclanthology.org/W04-1013
[22] T. Zhang, V. Kishore, F. Wu, K. Q. Weinberger, and Y. Artzi, "Bertscore: Evaluating text generation with bert," arXiv preprint arXiv:1904.09675, 2019.
[23] Y. Cui, W. Che, T. Liu, B. Qin, and Z. Yang, "Pre-training with whole word masking for chinese bert," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 29, pp. 3504-3514, 2021.
[24] Y. Liu et al., "Roberta: A robustly optimized bert pretraining approach," arXiv preprint arXiv:1907.11692, 2019.
[25] Y. He, Z. Zhu, Y. Zhang, Q. Chen, and J. Caverlee, "Infusing Disease Knowledge into BERT for Health Question Answering, Medical Inference and Disease Name Recognition," in Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online, November 2020, pp. 4604-4614, doi: 10.18653/v1/2020.emnlp-main.372. [Online]. Available: https://aclanthology.org/2020.emnlp-main.372
[26] T. G. Dietterich, "Approximate statistical tests for comparing supervised classification learning algorithms," Neural computation, vol. 10, no. 7, pp. 1895-1923, 1998.
[27] A. H. Bowker, "A test for symmetry in contingency tables," Journal of the american statistical association, vol. 43, no. 244, pp. 572-574, 1948.
[28] Z. Zhao et al., "UER: An Open-Source Toolkit for Pre-training Models," in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations, Hong Kong, China, November 2019, pp. 241-246, doi: 10.18653/v1/D19-3041. [Online]. Available: https://aclanthology.org/D19-3041
[29] R. Dror, G. Baumer, S. Shlomov, and R. Reichart, "The Hitchhiker’s Guide to Testing Statistical Significance in Natural Language Processing," in Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Melbourne, Australia, July 2018, pp. 1383-1392, doi: 10.18653/v1/P18-1128. [Online]. Available: https://aclanthology.org/P18-1128
[30] T. Berg-Kirkpatrick, D. Burkett, and D. Klein, "An Empirical Investigation of Statistical Significance in NLP," in Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, Jeju Island, Korea, July 2012, pp. 995-1005. [Online]. Available: https://aclanthology.org/D12-1091
[31] B. Efron and R. J. Tibshirani, An introduction to the bootstrap. CRC press, 1994.
Advisor: 李龍豪 (Lung-Hao Lee)    Date of Approval: 2023-10-13
