NCU Institutional Repository - National Central University (theses and dissertations, past exams, journal articles, research projects): Item 987654321/86637


    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/86637


    Title: A Study on Combining the Selective Mechanism with Multi-Head Attention for Automatic Text Summarization (結合Selective Mechanism與多向注意力機制應用於自動文本摘要之研究)
    Authors: Tung, Tzu-Hsuan (董子瑄)
    Contributors: Department of Information Management
    Keywords: Transformer; Selective mechanism; Self-attention; Abstractive summarization; Chinese summarization
    Date: 2021-08-02
    Upload time: 2021-12-07 13:02:48 (UTC+8)
    Publisher: National Central University
    Abstract: Text summarization aims to re-present an original article in condensed text while retaining its key points and original semantics. This research combines a selective mechanism with the multi-head attention of the Transformer model to improve the quality of summaries generated by an abstractive summarization model. A trainable selective gate network filters the multi-head attention outputs of the Transformer encoder, selecting the important information that should be kept in the summary and discarding secondary information, to construct a second-level latent semantic representation. This refined sentence representation is then decoded to produce a better summary.
    The model is applied to Chinese text summarization, with ROUGE scores as the evaluation metric. Experimental results show that the model exceeds the baseline on ROUGE-1, ROUGE-2, and ROUGE-L, improving by about 7.3-12.7% on word-based ROUGE and about 4.9-7.9% on character-based ROUGE. Moreover, combining word-to-character tokenization with a larger vocabulary further improves all ROUGE metrics: word-based ROUGE increases by an additional 20.4-41.8%, and character-based ROUGE by about 21.5-31.1%.
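The selective gate described above can be sketched as follows. This is a minimal illustration, not the author's code: the per-dimension weights `w_h` and `w_s` are a diagonal simplification of the full weight matrices in the thesis, and mean pooling stands in for whatever sentence vector the model actually uses.

```python
import math

def selective_gate(h, s, w_h, w_s, b):
    """Apply a selective gate to one encoder position.

    h: encoder output vector at one position (a multi-head attention output)
    s: sentence-level vector (here, the mean of all encoder outputs)
    w_h, w_s, b: illustrative per-dimension gate parameters
    Returns h' = h * sigmoid(w_h*h + w_s*s + b), the gated vector.
    """
    gated = []
    for i in range(len(h)):
        g = 1.0 / (1.0 + math.exp(-(w_h[i] * h[i] + w_s[i] * s[i] + b[i])))
        gated.append(h[i] * g)  # gate in (0, 1) attenuates each component
    return gated

# Toy second-level representation for a 3-token sequence, d = 2.
H = [[0.5, -1.0], [2.0, 0.3], [-0.7, 0.9]]
s = [sum(col) / len(H) for col in zip(*H)]          # mean pooling
w_h, w_s, b = [1.0, 1.0], [0.5, 0.5], [0.0, 0.0]    # illustrative weights
H2 = [selective_gate(h, s, w_h, w_s, b) for h in H]
```

Because the sigmoid gate lies strictly between 0 and 1, every component of the second-level representation `H2` is a shrunken copy of the corresponding encoder output, which is how the gate "filters out" less important information before decoding.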
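The evaluation distinguishes word-based from character-based ROUGE: the same unigram-overlap formula, applied either to tokenized words or to individual characters. A minimal ROUGE-1 F1 sketch (the function name and inputs are illustrative; published results would normally come from a standard ROUGE toolkit):

```python
from collections import Counter

def rouge_1_f(candidate, reference):
    """ROUGE-1 F1: clipped unigram overlap between candidate and reference.

    Pass strings for character-based ROUGE, or token lists for word-based.
    """
    cand, ref = Counter(candidate), Counter(reference)
    overlap = sum((cand & ref).values())  # clipped counts via multiset intersection
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

char_score = rouge_1_f("abcd", "abce")                            # character-based
word_score = rouge_1_f("the cat sat".split(), "the cat ran".split())  # word-based
```

For Chinese, character-based ROUGE sidesteps word-segmentation differences between the candidate and reference, which is why the thesis reports both variants.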
    Appears in Collections: [Graduate Institute of Information Management] Theses & Dissertations

    Files in this item:

    File        Description  Size  Format  Views
    index.html               0 KB  HTML    81


    All items in NCUIR are protected by copyright, with all rights reserved.
