DC Field | Value | Language
dc.contributor | 軟體工程研究所 | zh_TW
dc.creator | 張景泰 | zh_TW
dc.creator | Ching-Tai Chang | en_US
dc.date.accessioned | 2023-02-02T07:39:07Z | |
dc.date.available | 2023-02-02T07:39:07Z | |
dc.date.issued | 2023 | |
dc.identifier.uri | http://ir.lib.ncu.edu.tw:444/thesis/view_etd.asp?URN=109525007 | |
dc.contributor.department | 軟體工程研究所 | zh_TW |
dc.description | 國立中央大學 | zh_TW
dc.description | National Central University | en_US
dc.description.abstract | Factual inconsistency in a summary means that information in the summary cannot be verified against the source article. It is a thorny problem in abstractive summarization: studies have shown that roughly 30% of model-generated summaries suffer from factual inconsistency, which makes abstractive summarization hard to apply in practice, and researchers have begun to pay close attention to this problem in recent years.
Previous approaches tend to supply additional background knowledge and fuse it into the model, or to check and correct the generated output after the model decodes.
Contrastive learning is a model-training method introduced in recent years that has achieved excellent results in the image domain. The idea is to exploit the contrast between positive and negative samples so that the vectors the model learns cluster by kind: vectors obtained from positive samples end up closer to one another, while vectors obtained from negative samples end up farther apart. In this way the model acquires, to a certain extent, the ability to tell different things apart.
In our study, we first identify, for each sentence of the summary, the most relevant sentences in the source article. We then apply contrastive learning to the encoder so that the encoded vectors better capture the parts of the source article that are relevant to the summary, which in turn leads the decoder to produce summaries that are more factually consistent. | zh_TW
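The near/far vector geometry described in the abstract corresponds to the standard InfoNCE-style contrastive objective. The formula below is a generic statement of that objective, not one quoted from the thesis; z is the anchor representation, z^+ a positive sample, z_i^- the i-th of N negative samples, sim(·,·) a similarity function such as cosine similarity, and τ a temperature hyperparameter:

\mathcal{L}_{\mathrm{CL}} = -\log \frac{\exp\!\left(\mathrm{sim}(z, z^{+})/\tau\right)}{\exp\!\left(\mathrm{sim}(z, z^{+})/\tau\right) + \sum_{i=1}^{N} \exp\!\left(\mathrm{sim}(z, z_{i}^{-})/\tau\right)}

Minimizing this loss pulls the anchor toward its positive sample and pushes it away from the negatives, producing exactly the clustering behavior the abstract describes.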
dc.description.abstract | Hallucination, also known as factual inconsistency, occurs when a model generates summaries that contain incorrect information or information not mentioned in the source text.
It is a critical problem in abstractive summarization and makes model-generated summaries hard to use in practice.
Previous work tends to inject additional information, such as background knowledge, into the model, or to apply post-correction or reranking methods after decoding to mitigate this problem.
Contrastive learning is a recent model-training method that has achieved excellent results in image processing. The idea is to use the contrast between positive and negative samples so that the vectors learned by the model cluster together: given an anchor point, the distance between the anchor and the positive samples becomes smaller, while the distance between the anchor and the negative samples becomes larger. In this way, the model acquires, to a certain extent, the ability to distinguish positive examples from negative ones.
We propose a new method that improves factual consistency by separating the representations of the most relevant and the least relevant sentences of the source document during the training phase through contrastive learning, so that the model learns to generate summaries that stay closer to the main points of the source documents. | en_US
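As a concrete illustration of the training signal described above, here is a minimal PyTorch sketch of an InfoNCE-style loss over encoder sentence vectors. Everything in it (the function name, toy dimensions, and temperature value) is an illustrative assumption, not the thesis's actual implementation:

# Minimal sketch of a contrastive loss over encoder sentence vectors.
# Assumes some encoder has already mapped sentences to fixed-size vectors;
# the names and toy tensors below are hypothetical, for illustration only.
import torch
import torch.nn.functional as F

def contrastive_loss(anchor, positives, negatives, temperature=0.1):
    # Normalize so that dot products are cosine similarities.
    anchor = F.normalize(anchor, dim=-1)        # (d,)
    positives = F.normalize(positives, dim=-1)  # (P, d)
    negatives = F.normalize(negatives, dim=-1)  # (N, d)
    pos_sim = positives @ anchor / temperature  # (P,) similarity to relevant sentences
    neg_sim = negatives @ anchor / temperature  # (N,) similarity to irrelevant sentences
    logits = torch.cat([pos_sim, neg_sim])      # all candidates
    # For each positive: -log softmax(positive | all candidates), averaged.
    return (torch.logsumexp(logits, dim=0) - pos_sim).mean()

torch.manual_seed(0)
anchor = torch.randn(16)        # encoded summary sentence (the anchor)
positives = torch.randn(3, 16)  # vectors of the most relevant source sentences
negatives = torch.randn(5, 16)  # vectors of the least relevant source sentences
print(contrastive_loss(anchor, positives, negatives))  # scalar loss to minimize

Minimizing this loss pulls the summary-sentence vector toward the relevant source-sentence vectors and away from the irrelevant ones, which is the separation of representations the abstract describes.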
dc.subject | 抽象式摘要 | zh_TW
dc.subject | 預訓練模型 | zh_TW
dc.subject | 對比學習 | zh_TW
dc.subject | 事實一致性 | zh_TW
dc.subject | Abstractive Summarization | en_US
dc.subject | Pre-trained Model | en_US
dc.subject | Factual Inconsistency | en_US
dc.subject | Hallucination | en_US
dc.subject | Contrastive Learning | en_US
dc.title | 利用與摘要相關的文章重點句結合對比學習改進摘要模型的事實一致性 | zh_TW
dc.language.iso | zh-TW | zh-TW |
dc.title | Combining Key Sentences Related to the Abstract with Contrastive Learning to Improve Summarization Factual Inconsistency | en_US
dc.type | 博碩士論文 | zh_TW
dc.type | thesis | en_US
dc.publisher | National Central University | en_US