

    Please use this persistent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/89942


    Title: A Multi-turn Dialogue Generation Framework Based on History and Context Information
    Author: Liu, Hsing-Yu (劉星瑜)
    Contributors: Department of Information Management
    Keywords: multi-turn dialogue generation; hierarchical recurrent network; word-level attention; utterance-level attention; contextual embedding
    Date: 2022-09-27
    Upload time: 2022-10-04 12:05:21 (UTC+8)
    Publisher: National Central University
    Abstract: Conversation agents are a common application of natural language generation and are often built as multi-turn dialogue systems, i.e., systems that sustain a conversation over more than one turn. Users expect a conversation agent to retain information from earlier turns even after several exchanges, and to generate meaningful, relevant, and consistent responses in the current turn. We therefore propose a multi-turn dialogue generation framework based on dialogue history and context information, aiming to examine whether history dialogue or context dialogue information has the greater impact on multi-turn dialogue.
    Using a hierarchical recurrent framework, which reflects how multi-turn dialogue unfolds in real-world settings, we compare our models with previous work on multi-turn dialogue generation tasks. We evaluate the effectiveness of several models: the baselines, Hierarchical Recurrent Encoder-Decoder (HRED) and Generative Pre-training Transformer (GPT-2), and our proposed models, the Hierarchical Recurrent framework with History Dialogue (HRHD) and the Hierarchical Recurrent framework with Context Dialogue (HRCD), using both automatic evaluation metrics and human evaluation.
    The HRHD model is conceptually simple and shows promising performance, achieving strong results on both an open-domain dataset and a task-oriented dataset. Furthermore, the HRCD model outperforms the HRED baseline and approaches the performance of the GPT-2 baseline. Through these experiments and analyses, we show that both history dialogue and context dialogue information can effectively improve the performance of multi-turn dialogue generation.
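    The utterance-level attention mentioned in the keywords can be illustrated with a minimal sketch: each encoded utterance in the dialogue history is scored against the decoder's current query, and the softmax-weighted sum of utterance vectors becomes the dialogue context vector. This is a generic illustration of the technique, not the thesis's exact implementation; the vector dimensions and the `utterance_attention` helper are assumptions for the example.

    ```python
    import math

    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def softmax(scores):
        # Numerically stable softmax over a list of scores.
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        return [e / total for e in exps]

    def utterance_attention(utterance_vecs, query):
        """Weight each encoded utterance by its relevance to the decoder
        query, then return the weights and the weighted-sum context vector."""
        scores = [dot(u, query) for u in utterance_vecs]
        weights = softmax(scores)
        dim = len(query)
        context = [sum(w * u[i] for w, u in zip(weights, utterance_vecs))
                   for i in range(dim)]
        return weights, context

    # Example: three encoded utterances from the dialogue history (toy 2-d vectors)
    history = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
    query = [1.0, 0.0]  # current decoder hidden state
    weights, context = utterance_attention(history, query)
    ```

    A word-level attention works the same way one level down, scoring individual word states within each utterance; stacking the two levels is what makes the framework hierarchical.
    
    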
    Appears in Collections: [Graduate Institute of Information Management] Master's and doctoral theses



