NCU Institutional Repository (中大機構典藏): Item 987654321/86674


    Please use this persistent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/86674


    Title: Transfer Latent Spaces for Stylized Dialogue Generation
    Author: Chen, Wei-Liang
    Contributor: Department of Information Management
    Keywords: Dialogue generation; text style transfer; deep neural network; multi-task learning
    Date: 2021-08-23
    Uploaded: 2021-12-07 13:06:24 (UTC+8)
    Publisher: National Central University
    Abstract: Dialogue generation technology has shown great potential, but current dialogue systems usually produce plain, generic responses. Having the system generate stylized responses directly is one way to make its replies more diverse. In this thesis we propose a stylized dialogue generation method that performs style transfer while generating the response, so that a single input utterance can be answered in multiple styles; the goal is to let a machine reply with a style suited to the dialogue scenario. The task effectively combines dialogue generation with text style transfer, so we require not only that a response be appropriate to the context but also that it exhibit strong style intensity.
    Because of the nature of the data, dialogue corpora are usually parallel (each context has a corresponding response) while style corpora are usually non-parallel. We therefore build the dialogue generation model with supervised learning and the style transfer model with unsupervised learning, let the two models share a decoder, and combine them into a single multi-task model. We propose lightweight deep neural networks that bridge the latent space of the dialogue generation model to the latent space of the style transfer model: plugging in the bridge network for a different style makes the model generate a response in that style, which lets the model produce especially vivid and memorable replies. We show that the dialogue latent space can be successfully bridged to the style transfer latent space, and we compare our model against baseline models using several automatic evaluation metrics together with human evaluation. The results indicate that our model generates responses with much stronger style intensity and better fluency than the baselines while maintaining comparable response appropriateness. Finally, to examine how broadly the model applies, we test it on two external dialogue datasets; the results show that it works well on everyday conversational text.
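    The abstract describes the architecture only at a high level: a supervised dialogue generation model and an unsupervised style transfer model share one decoder, and a lightweight bridge network maps the dialogue latent space into a given style's latent space. The sketch below illustrates how such a shared-decoder, pluggable-bridge setup could be wired up in PyTorch; the module names, the GRU seq2seq backbone, the layer sizes, and the style labels are illustrative assumptions, not the thesis's actual implementation.

    import torch.nn as nn

    class Encoder(nn.Module):
        """GRU encoder that maps a token sequence to a single latent vector."""
        def __init__(self, vocab_size, emb_dim=256, hid_dim=512):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)

        def forward(self, tokens):
            _, h = self.rnn(self.embed(tokens))   # h: (1, batch, hid_dim)
            return h.squeeze(0)                   # one latent vector per sentence

    class SharedDecoder(nn.Module):
        """Decoder shared by the dialogue task and the style transfer task."""
        def __init__(self, vocab_size, emb_dim=256, hid_dim=512):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)
            self.out = nn.Linear(hid_dim, vocab_size)

        def forward(self, latent, targets):
            # Teacher forcing: decode target tokens conditioned on the latent vector.
            output, _ = self.rnn(self.embed(targets), latent.unsqueeze(0))
            return self.out(output)               # (batch, seq_len, vocab_size) logits

    class StyleBridge(nn.Module):
        """Lightweight network mapping a dialogue latent into one style's latent space."""
        def __init__(self, hid_dim=512):
            super().__init__()
            self.mlp = nn.Sequential(nn.Linear(hid_dim, hid_dim), nn.ReLU(),
                                     nn.Linear(hid_dim, hid_dim))

        def forward(self, dialogue_latent):
            return self.mlp(dialogue_latent)

    class StylizedDialogueModel(nn.Module):
        def __init__(self, vocab_size, styles=("formal", "humorous")):
            super().__init__()
            self.dialogue_encoder = Encoder(vocab_size)  # trained supervised on paired context-response data
            self.style_encoder = Encoder(vocab_size)     # trained unsupervised on non-parallel style text
            self.decoder = SharedDecoder(vocab_size)     # shared by both tasks
            self.bridges = nn.ModuleDict({s: StyleBridge() for s in styles})

        def forward(self, context_tokens, target_tokens, style=None):
            z = self.dialogue_encoder(context_tokens)
            if style is not None:
                z = self.bridges[style](z)  # plug in the bridge for the requested style
            return self.decoder(z, target_tokens)

    Under this assumed layout, the dialogue encoder and shared decoder would be trained with supervised learning on parallel context-response pairs, the style encoder and the same decoder with unsupervised objectives on non-parallel style text, and at inference time calling the model with style="formal" versus style="humorous" selects a different bridge, producing differently styled responses to the same context.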
    Appears in Collections: [Graduate Institute of Information Management] Theses & Dissertations

    Files in This Item:

    File          Description    Size    Format    Views
    index.html                   0Kb     HTML      77


    All items in NCUIR are protected by copyright, with all rights reserved.

