Master's/Doctoral Thesis 108522098: Complete Metadata Record

DC Field    Value    Language
dc.contributor 資訊工程學系 zh_TW
dc.creator 陳大富 zh_TW
dc.creator Ta-Fu Chen en_US
dc.date.accessioned 2021-08-24T07:39:07Z
dc.date.available 2021-08-24T07:39:07Z
dc.date.issued 2021
dc.identifier.uri http://ir.lib.ncu.edu.tw:444/thesis/view_etd.asp?URN=108522098
dc.contributor.department 資訊工程學系 zh_TW
dc.description 國立中央大學 zh_TW
dc.description National Central University en_US
dc.description.abstract 近年來在自然語言處理領域的研究,皆漸漸的轉往使用大型預訓練語言模型,在開放領域的問答系統也不例外。大型預訓練語言模型為問答系統帶來了強大的理解能力與答案抽取能力,但隨之而來的是其龐大參數量,所帶來的緩慢推理速度,再加上實際應用時模型需要處理的內容數量不固定而導致體驗不佳的問題。本論文提出一個中文開放領域問答系統其中加入Reranking的機制,對於要進入問答模型的文章改以段落為單位並進行語意層面的篩選,不但可提供傳統檢索器所缺乏的語意資訊外,更可以藉此有效的減少並控制進入問答模型的段落數量,以達到降低問答模型被誤導的可能性,並大幅提升系統給出答案的反應速度。 開放領域的問答中其問答範圍是不設限在特定領域的,所以在實際應用時勢必會遇到許多訓練時不曾見過的樣本,因此問答模型必須具備有非常良好的泛化能力,才能有較佳的表現。並且在使用問答系統時使用者提出的問題時常會帶有口語化的人類習慣,這樣的特性與訓練資料集中,相較之下較為規矩的問句格式有些差異。因此本論文提出了一套用於中文問答的方法,包括對訓練資料的處理與訓練時的方式。對於資料的處理,目標在於利用現有的資料集進行調整與組合等,以提高接受問題類型的能力。對於訓練的方式,目標在於利用調整訓練時的樣本長度等,以提高模型對不同長度的適應性。藉由上述方法可提升模型的泛化能力,並使其對於口語化的問答有較良好的接受度,進而提升模型在給予答案時的精確度。 zh_TW
dc.description.abstract In recent years, research in natural language processing has shifted toward large-scale pre-trained language models, and open-domain question answering (QA) is no exception. These models give QA systems powerful comprehension and answer-extraction capabilities, but their enormous parameter counts make inference slow, and because the amount of content the model must process varies in real deployments, the user experience suffers. This thesis proposes a Chinese open-domain QA system that incorporates a reranking mechanism: retrieved articles are split into paragraphs, which are then filtered at the semantic level before entering the QA model. This not only supplies the semantic information that traditional retrievers lack, but also effectively reduces and controls the number of paragraphs passed to the QA model, lowering the chance that the model is misled and greatly improving the system's response time. Because open-domain QA is not restricted to any particular domain, a deployed system inevitably encounters many samples unseen during training, so the QA model must generalize well to perform well. Moreover, users' questions are often colloquial, which differs from the comparatively formal question formats found in training datasets. This thesis therefore proposes a set of methods for Chinese QA covering both the processing of training data and the training procedure itself.
For data processing, the goal is to adjust and combine existing datasets to broaden the range of question types the model can handle. For training, the goal is to vary sample length during training to improve the model's adaptability to questions of different lengths. Together, these methods improve the model's generalization, make it more tolerant of colloquial questions, and thereby increase the accuracy of its answers. en_US
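The retrieve, rerank, and read pipeline the abstract describes can be sketched as follows. This is a minimal illustration under stated assumptions, not the thesis implementation: the character-overlap scorer, the function names, and the paragraph-as-answer stub are hypothetical stand-ins for the pre-trained semantic reranker and QA reader the thesis actually uses.

```python
# Sketch of: retrieve documents -> split into paragraphs -> rerank ->
# read only the top paragraphs. The capped top_k is what bounds the
# number of paragraphs entering the reader, as the abstract describes.

def split_into_paragraphs(documents):
    """Break each retrieved document into paragraphs (the reranking unit)."""
    return [p for doc in documents for p in doc.split("\n") if p.strip()]

def rerank(question, paragraphs, top_k=3):
    """Score paragraphs against the question and keep only the top_k.

    Character overlap is a toy proxy for the semantic score a pre-trained
    model would produce; it exists only to make the sketch runnable.
    """
    def score(p):
        return len(set(question) & set(p))
    return sorted(paragraphs, key=score, reverse=True)[:top_k]

def answer(question, documents, top_k=3):
    """Run the reader (stubbed) only on the reranked paragraphs."""
    candidates = rerank(question, split_into_paragraphs(documents), top_k)
    # A real system would extract an answer span with a QA model here;
    # this stub simply returns the best-ranked paragraph.
    return candidates[0] if candidates else None
```

Capping `top_k` is the design point: the reader's latency grows with the number of paragraphs it must process, so filtering semantically first both speeds up the response and removes distracting passages.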
dc.subject 問答系統 zh_TW
dc.subject 開放領域 zh_TW
dc.subject 開放領域問答系統 zh_TW
dc.subject 檢索再評分 zh_TW
dc.subject 預訓練 zh_TW
dc.subject Question Answering System en_US
dc.subject Open-domain en_US
dc.subject Open-domain Question Answering System en_US
dc.subject Retrieval reranking en_US
dc.subject Pre-training en_US
dc.title 基於預訓練模型與再評分機制之開放領域中文問答系統 zh_TW
dc.language.iso zh-TW zh-TW
dc.title Open Domain Chinese Question Answering System based on Pre-training Model and Retrieval Reranking en_US
dc.type 博碩士論文 zh_TW
dc.type thesis en_US
dc.publisher National Central University en_US
