RC Version 7.0 © Powered By DSPACE, MIT. Enhanced by NTU Library IR team.


    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/77674


    Title: Answer Questions at the Right Time with Recurrent Neural Networks (應用遞歸神經網路於適當的時機回答問題)
    Author: Han, Jen-Chieh (韓任倢)
    Contributor: Department of Computer Science and Information Engineering
    Keywords: Answer Time; Time Interval; Dialogue; Deep Learning; Long Short-Term Memory; Attention Mechanism; Word Embedding; Named Entity
    Date: 2018-07-27
    Upload time: 2018-08-31 14:52:23 (UTC+8)
    Publisher: National Central University
    Abstract: In recent years, as Chinese natural-language-processing techniques have matured, they have been applied ever more deeply to everyday life, and dialogue systems have flourished accordingly. Such a system segments the user's input sentence word by word, determines whether each word relates to the application domain, remembers the content while recognizing it, and finally connects the pieces to understand the question the user is asking, so that it can return an appropriate answer. All of this research has advanced considerably, but most dialogue systems assume that the system replies once per user input, i.e. a one-turn format. In practice, however, users often describe a problem over several consecutive inputs while the customer-service agent waits; deciding how to handle this situation is the goal of this study.
    We use real-world Chinese dialogue data and, beyond the words themselves, add features such as named-entity tags and the time interval between sentences to train deep-learning models and identify the most useful features. We adopt the Long Short-Term Memory model, which retains state and performs well on sequential data, and add an attention mechanism where appropriate to improve performance. We also use pre-trained word embeddings to reduce the negative impact of noise from unknown words. Finally, this thesis compares several feature sets and trains models augmented with attention; compared with the common approach of answering after every single input, our method improves results by about 2% on average on a Chinese telecom-domain dialogue dataset.

    Abstract: In recent years, natural language processing techniques have matured and expanded into real-life applications, and dialogue systems have established a strong position. Such a system must segment each word of a sentence, determine whether it relates to the target domain, remember the content while recognizing it, and finally connect the relations among the words to understand the user's actual question, so that it can answer at the right time. One problem remains, however: most dialogue systems answer immediately after each user input. In practice, users often enter several messages to describe a single problem while the customer-service staff simply waits; this is the problem our research addresses.
    We use dialogue data with features including the words themselves, named-entity tags, and time intervals, training deep-learning models on these features to find the most appropriate ones. We apply an LSTM, whose gates retain content and cope well with sequences, together with an attention mechanism to obtain better results, and we use pre-trained word embeddings to reduce noise in the data. Finally, our thesis compares several kinds of features, trains the model with attention, and determines after which user input it is appropriate to answer the question. On a telecom-domain dialogue dataset, the model scores about 2% higher than the one-input, one-answer baseline.
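The pipeline the abstract describes — per-turn features built from word vectors, a named-entity indicator, and the inter-turn time gap, pooled by attention before an answer/wait decision — can be illustrated with a toy sketch. This is a minimal illustration under assumed shapes, not the thesis's actual model: the embeddings, the attention query, and the final linear "classifier" here are random stand-ins, and the real system uses an LSTM rather than mean pooling.

```python
import numpy as np

rng = np.random.default_rng(0)
EMB_DIM = 8  # toy word-embedding size; the thesis uses pre-trained embeddings

def turn_features(token_vecs, ne_flag, time_gap):
    """One vector per user turn: mean-pooled word vectors, concatenated
    with an NE indicator and the (log-scaled) time gap in seconds."""
    pooled = token_vecs.mean(axis=0)
    return np.concatenate([pooled, [float(ne_flag)], [np.log1p(time_gap)]])

def attention_pool(turn_vecs, query):
    """Dot-product attention over turn vectors; returns context + weights."""
    scores = turn_vecs @ query
    weights = np.exp(scores - scores.max())  # stable softmax
    weights /= weights.sum()
    return weights @ turn_vecs, weights

# Toy dialogue: three user turns of (word vectors, has-NE flag, gap seconds)
turns = [
    (rng.normal(size=(4, EMB_DIM)), 1, 2.0),
    (rng.normal(size=(6, EMB_DIM)), 0, 1.5),
    (rng.normal(size=(3, EMB_DIM)), 1, 30.0),
]
T = np.stack([turn_features(*t) for t in turns])      # (3, EMB_DIM + 2)
query = rng.normal(size=T.shape[1])                    # hypothetical query
context, weights = attention_pool(T, query)

# A trained answer/wait classifier would score `context`; a random
# linear layer + sigmoid stands in for it here.
w = rng.normal(size=context.shape[0])
p_answer = 1.0 / (1.0 + np.exp(-(w @ context)))        # in (0, 1)
```

The design point the abstract makes is visible even in this sketch: the time-gap feature enters on equal footing with the text features, so the model can learn that a long pause after a turn raises the probability that the user has finished describing the problem.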

    Keywords: Answer Time; Time Interval; Dialogue; Deep Learning; Long Short-Term Memory; Attention Mechanism; Word Embedding; Named Entity
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Master's and Doctoral Theses

    Files in This Item:

    File        Description  Size  Format  Views
    index.html               0Kb   HTML    117    View/Open


    All items in NCUIR are protected by original copyright.

