Master's/Doctoral Thesis 105522064 — Complete Metadata Record

DC Field                    Value                                                                      Language
dc.contributor              資訊工程學系 (Department of Computer Science and Information Engineering)   zh_TW
dc.creator                  韓任倢                                                                     zh_TW
dc.creator                  Jen-Chieh Han                                                              en_US
dc.date.accessioned         2018-07-27T07:39:07Z
dc.date.available           2018-07-27T07:39:07Z
dc.date.issued              2018
dc.identifier.uri           http://ir.lib.ncu.edu.tw:88/thesis/view_etd.asp?URN=105522064
dc.contributor.department   資訊工程學系 (Department of Computer Science and Information Engineering)   zh_TW
dc.description              國立中央大學 (National Central University)                                  zh_TW
dc.description              National Central University                                                en_US
dc.description.abstract (zh_TW, translated to English): In recent years, as Chinese natural language processing techniques have matured, they have been pushed further into everyday applications, and dialogue systems have flourished accordingly. Such a system must segment the user's input sentence word by word, determine whether each word relates to the application domain, remember the content while recognizing it, and finally link these pieces together to understand the question the user intends to ask, so that the system can reply with an appropriate answer. Research on all of these steps has made substantial progress, but most dialogue systems assume a one-turn setting: the system replies once for every user input. In practice, however, users often describe a problem across several consecutive inputs while the customer-service agent waits; deciding how to handle this situation is the goal of this study. We use real-world Chinese dialogue data and, beyond the words themselves, add features such as named-entity tags and the time interval between sentences to train deep-learning models and identify the appropriate features. We adopt the long short-term memory (LSTM) model, which has memory and performs well on sequential data, and add an attention mechanism at suitable points to improve performance. We also use pre-trained word embeddings to reduce the negative impact of unknown-word noise. Finally, this thesis compares several kinds of features and trains models with an attention mechanism; compared with the common answer-after-every-input approach, our method improves performance by about 2% on average on a Chinese telecom-domain dialogue dataset. Keywords: answer timing; time interval; dialogue; deep learning; long short-term memory; attention mechanism; word embedding; named entity — zh_TW
dc.description.abstract (en_US): In recent years, natural language processing has matured and expanded into real-life applications, and dialogue systems have established a strong position among them. A system must segment each word of the user's sentence, determine whether it relates to the target domain, remember the content during recognition, and finally connect the relations among the words to understand the user's actual question, so that it can answer at the right time. There is still a problem, however: most dialogue systems answer immediately after every user input. In practice, users often describe their problem over several consecutive inputs while the customer-service staff simply waits; handling this situation is the problem our research addresses. We use dialogue data with features such as words, named-entity tags, and time intervals to train deep-learning models and select the appropriate features. We apply an LSTM, whose gates remember content and which copes well with sequences, together with an attention mechanism to obtain better results, and we use pre-trained word embeddings to reduce noise in the data. Finally, this thesis compares several kinds of features, trains the model with an attention mechanism, and determines after which of the user's inputs it is appropriate to answer. On a telecom-domain dialogue dataset, the model is about 2% better than the one-input, one-answer baseline. Keywords: Answer Time; Time Interval; Dialogue; Deep Learning; Long Short-Term Memory; Attention Mechanism; Word Embedding; Named Entity — en_US
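The pipeline the abstract describes can be sketched roughly as follows — a minimal, illustrative numpy-only implementation, not the thesis's actual model (all names, feature dimensions, and weights here are hypothetical and untrained): each consecutive user utterance is encoded as a feature vector (e.g. averaged word embeddings, named-entity flags, and the time gap since the previous utterance), a small LSTM reads the sequence, attention pools the hidden states, and a sigmoid output gives the probability that the system should answer now rather than keep waiting.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical sizes: per-utterance feature dim and LSTM hidden dim.
FEAT, HID = 8, 16

# Randomly initialized (untrained) parameters, for illustration only.
W = rng.normal(0, 0.1, (4 * HID, FEAT + HID))  # input/forget/cell/output gates, stacked
b = np.zeros(4 * HID)
v_att = rng.normal(0, 0.1, HID)                # attention scoring vector
w_out = rng.normal(0, 0.1, HID)                # final answer-vs-wait classifier

def lstm_step(x, h, c):
    """One LSTM step over a single utterance's feature vector."""
    z = W @ np.concatenate([x, h]) + b
    i, f, g, o = np.split(z, 4)
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c

def answer_probability(utterance_feats):
    """utterance_feats: (T, FEAT) array, one row per consecutive user input.
    Returns P(answer now) after the last input, via attention over all steps."""
    h, c = np.zeros(HID), np.zeros(HID)
    states = []
    for x in utterance_feats:
        h, c = lstm_step(x, h, c)
        states.append(h)
    H = np.stack(states)               # (T, HID) hidden state per utterance
    scores = H @ v_att                 # one attention score per utterance
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()               # softmax attention weights
    context = alpha @ H                # attention-weighted summary vector
    return sigmoid(w_out @ context)

# Toy dialogue: three consecutive user inputs with random features.
dialogue = rng.normal(0, 1, (3, FEAT))
p = answer_probability(dialogue)
print(round(float(p), 3))  # a probability in (0, 1); the parameters are untrained
```

In the thesis's actual setting, the parameters would be learned from labeled telecom-dialogue turns, and the features would come from pre-trained word embeddings, NER tags, and measured inter-sentence time intervals rather than random numbers.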
dc.subject                  回答時機                                                                   zh_TW
dc.subject                  時間間隔                                                                   zh_TW
dc.subject                  對話                                                                       zh_TW
dc.subject                  深度學習                                                                   zh_TW
dc.subject                  長短期記憶模型                                                             zh_TW
dc.subject                  注意力機制                                                                 zh_TW
dc.subject                  詞向量                                                                     zh_TW
dc.subject                  命名實體                                                                   zh_TW
dc.subject                  Answer Time                                                                en_US
dc.subject                  Time Interval                                                              en_US
dc.subject                  Dialogue                                                                   en_US
dc.subject                  Deep Learning                                                              en_US
dc.subject                  Long Short-Term Memory                                                     en_US
dc.subject                  Attention Mechanism                                                        en_US
dc.subject                  Word Embedding                                                             en_US
dc.subject                  Named Entity                                                               en_US
dc.title                    應用遞歸神經網路於適當的時機回答問題                                        zh_TW
dc.language.iso             zh-TW                                                                      zh-TW
dc.title                    Answer Questions at the Right Time with Recurrent Neural Networks          en_US
dc.type                     博碩士論文 (master's/doctoral thesis)                                       zh_TW
dc.type                     thesis                                                                     en_US
dc.publisher                National Central University                                                en_US
