NCU Institutional Repository (中大機構典藏): Item 987654321/83795


    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/83795


Title: Improving Cross-Lingual Text Summarization using Pretrained Encoder
    Authors: 莊家閔;Chuang, Chia-Min
Contributors: Institute of Software Engineering
Keywords: Text summarization; Pretrained language model; Cross-lingual
    Date: 2020-07-31
    Issue Date: 2020-09-02 17:07:01 (UTC+8)
Publisher: National Central University
Abstract: Cross-lingual text summarization (CLTS) is the task of generating a summary in one language from a document written in another language. Most previous work treats CLTS as a two-step pipeline: translate-then-summarize or summarize-then-translate. Both approaches suffer from translation errors, and the machine translation system is difficult to fine-tune jointly with the summarization task. To address these problems, we use a pretrained cross-lingual encoder, which has been shown to be effective in natural language generation, to represent inputs from different languages in a shared vector space. We augment a standard sequence-to-sequence (Seq2Seq) network with this pretrained cross-lingual encoder so as to capture cross-lingual contextualized word representations. We show that the pretrained cross-lingual encoder can be fine-tuned on a text summarization dataset while retaining its cross-lingual ability. We experiment with three different fine-tuning strategies and show that the pretrained encoder can capture word-level cross-lingual semantic features. The best of the proposed models obtains 42.08 ROUGE-1 on the ZH2ENSUM dataset [Zhu et al., 2019], improving over our baseline model by more than 3 ROUGE-1 points.
Appears in Collections: [Software Engineering] Electronic Thesis & Dissertation

    Files in This Item:

    File          Description    Size    Format
    index.html                   0Kb     HTML


    All items in NCUIR are protected by copyright, with all rights reserved.

