NCU Institutional Repository (中大機構典藏) — theses and dissertations, past exam questions, journal articles, and research projects: Item 987654321/89942


    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/89942


    Title: A Multi-turn Dialogue Generation Framework Based on History and Context Information
    Authors: 劉星瑜;Liu, Hsing-Yu
    Contributors: Department of Information Management (資訊管理學系)
    Keywords: multi-turn dialogue generation;hierarchical recurrent network;word-level attention;utterance-level attention;contextual embedding
    Date: 2022-09-27
    Issue Date: 2022-10-04 12:05:21 (UTC+8)
    Publisher: National Central University (國立中央大學)
    Abstract: Conversational agents are a common application of natural language generation and can be viewed as multi-turn dialogue systems that sustain a conversation for more than one turn. Users expect such agents to retain information from earlier turns of the dialogue and to generate meaningful, relevant, and consistent responses in the current turn. We therefore propose a multi-turn dialogue generation framework based on dialogue history and context information, aiming to study how history dialogue and context dialogue affect multi-turn dialogue generation.
    Using a hierarchical recurrent framework, which mirrors how multi-turn dialogue unfolds in practice, we compare our models with previous work on multi-turn dialogue generation tasks. We evaluate the effectiveness of several models: the baselines Hierarchical Recurrent Encoder-Decoder (HRED) and Generative Pre-training Transformer (GPT-2), together with our proposed Hierarchical Recurrent framework with History Dialogue (HRHD) and Hierarchical Recurrent framework with Context Dialogue (HRCD) models, using both automatic evaluation metrics and human evaluation.
    The HRHD model is conceptually simple and shows promising performance, achieving strong results on both an open-domain dataset and a task-oriented dataset. The HRCD model outperforms the HRED baseline and approaches the performance of the GPT-2 baseline. Through our experiments and analysis, we show that both history dialogue and context dialogue information effectively improve multi-turn dialogue generation.
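    The hierarchical recurrent framework with word-level and utterance-level attention named in the keywords can be illustrated with a minimal sketch: a recurrent cell encodes the words of each utterance, word-level attention summarizes each utterance into a vector, a second recurrent cell runs over those utterance vectors, and utterance-level attention produces the final dialogue context. This is an illustrative NumPy toy, not the thesis's implementation; the simple tanh cell, dot-product attention, and dimensions are assumptions made for brevity.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def rnn_step(x, h, Wx, Wh):
        # Simplified recurrent cell (a tanh RNN stands in for a GRU/LSTM here).
        return np.tanh(Wx @ x + Wh @ h)

    def attention(query, keys):
        # Dot-product attention: weight each key by its similarity to the query.
        scores = keys @ query                      # (T,)
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()                   # softmax over time steps
        return weights @ keys, weights             # weighted sum, (d,)

    def encode_dialogue(utterances, d=8):
        """Hierarchically encode a dialogue.

        utterances: list of utterances, each a list of word vectors of shape (d,).
        Returns a dialogue context vector and the utterance-level attention weights.
        """
        Wx = rng.standard_normal((d, d)) * 0.1
        Wh = rng.standard_normal((d, d)) * 0.1

        # 1) Word-level encoder + word-level attention, one utterance at a time.
        utt_summaries = []
        for words in utterances:
            h = np.zeros(d)
            states = []
            for w in words:
                h = rnn_step(w, h, Wx, Wh)
                states.append(h)
            # Summarize the word states, using the final state as the query.
            summary, _ = attention(h, np.stack(states))
            utt_summaries.append(summary)

        # 2) Utterance-level encoder over the utterance summaries.
        c = np.zeros(d)
        ctx_states = []
        for u in utt_summaries:
            c = rnn_step(u, c, Wx, Wh)
            ctx_states.append(c)

        # Utterance-level attention over the dialogue-level states.
        context, utt_weights = attention(c, np.stack(ctx_states))
        return context, utt_weights

    # Toy dialogue: 3 utterances of 4 random "word embeddings" each.
    dialogue = [[rng.standard_normal(8) for _ in range(4)] for _ in range(3)]
    context, utt_weights = encode_dialogue(dialogue)
    print(context.shape, utt_weights.shape)
    ```

    The utterance-level weights indicate how much each past turn contributes to the final context vector, which is the mechanism the framework relies on to keep earlier turns relevant when generating the current response.
    
    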
    Appears in Collections:[Graduate Institute of Information Management] Electronic Thesis & Dissertation

    Files in This Item:

    index.html (0 Kb, HTML)


    All items in NCUIR are protected by copyright, with all rights reserved.

