Master's/Doctoral Thesis 103522603: Complete Metadata Record

DC Field  Value  Language
dc.contributor  Department of Computer Science and Information Engineering  zh_TW
dc.creator  施庫瑪  zh_TW
dc.creator  Sipun Kumar Pradhan  en_US
dc.date.accessioned  2016-08-18T07:39:07Z
dc.date.available  2016-08-18T07:39:07Z
dc.date.issued  2016
dc.identifier.uri  http://ir.lib.ncu.edu.tw:88/thesis/view_etd.asp?URN=103522603
dc.contributor.department  Department of Computer Science and Information Engineering  zh_TW
dc.description  National Central University  zh_TW
dc.description  National Central University  en_US
dc.description.abstract  Abstract: Open-domain question answering (QA) systems aim to provide exact answers to questions posed in natural language, without restriction of domain. The goal of this research is to develop learning models that can automatically produce new features without having to be re-trained; in particular, the system addresses structure and meaning in order to solve multiple open-domain QA tasks. The main advantage of this framework is that it requires little feature engineering or domain knowledge while matching or surpassing current state-of-the-art results. Furthermore, it can easily be trained for use with any kind of open-domain QA. We investigate a new class of learning models called memory neural networks, which combine reasoning and inference with a long-term memory component and learn to use these jointly. The long-term memory can be read and written to, and we use it for prediction; this thesis applies it to question answering (QA), where the long-term memory serves as the basis of a dynamic knowledge base and results are produced in textual form. We propose a system based on end-to-end neural networks that can achieve the desired goals and learn to perform non-trivial operations. At the end of the thesis, the system is compared against several different benchmark datasets, and future work is discussed.  zh_TW
dc.description.abstract  Open-domain Question Answering (QA) systems aim to provide the exact answer(s) to questions formulated in natural language, without restriction of domain. My research goal in this thesis is to develop learning models that can automatically induce new facts without having to be re-trained, in particular capturing their structure and meaning, in order to solve multiple open-domain QA tasks. The main advantage of this framework is that it requires little feature engineering and domain specificity whilst matching or surpassing state-of-the-art results. Furthermore, it can easily be trained for use with any kind of open-domain QA. I investigate a new class of learning models called memory neural networks. Memory neural networks reason with inference components combined with a long-term memory component; they learn how to use these jointly. The long-term memory can be read and written to, with the goal of using it for prediction. I investigate these models in the context of question answering (QA), where the long-term memory effectively acts as a (dynamic) knowledge base and the output is a textual response. Finally, I show that an end-to-end dialog system based on memory neural networks can reach promising results and learn to perform non-trivial operations. I confirm these results by comparing my system against several well-established baseline datasets, and future work is discussed.  en_US
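The abstract describes a long-term memory that is read (and written) and attended over to produce a textual answer. The following is a minimal, single-hop sketch in the spirit of end-to-end memory networks, not the thesis implementation: the toy vocabulary, embedding sizes, and the bow/answer helpers are illustrative assumptions only.

# Minimal single-hop memory-network sketch for QA (illustrative assumptions only).
import numpy as np

rng = np.random.default_rng(0)

VOCAB = ["<pad>", "mary", "went", "to", "the", "kitchen", "john", "garden", "where", "is"]
W2I = {w: i for i, w in enumerate(VOCAB)}
V, D = len(VOCAB), 16  # vocabulary size, embedding dimension

# Separate embedding matrices for memory keys (A), memory values (C), and the question (B),
# plus a final projection W to a distribution over answer words.
A = rng.normal(0, 0.1, (V, D))
C = rng.normal(0, 0.1, (V, D))
B = rng.normal(0, 0.1, (V, D))
W = rng.normal(0, 0.1, (D, V))

def bow(sentence, E):
    # Bag-of-words embedding of a tokenized sentence with embedding matrix E.
    ids = [W2I[w] for w in sentence]
    return E[ids].sum(axis=0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def answer(story, question):
    # One memory "hop": attend over stored sentences, then predict an answer word.
    m = np.stack([bow(s, A) for s in story])   # memory keys, one row per sentence
    c = np.stack([bow(s, C) for s in story])   # memory values
    u = bow(question, B)                       # question embedding
    p = softmax(m @ u)                         # attention over the memory
    o = p @ c                                  # retrieved memory summary
    logits = (o + u) @ W                       # combine and project to the vocabulary
    return VOCAB[int(np.argmax(logits))], p

story = [["mary", "went", "to", "the", "kitchen"],
         ["john", "went", "to", "the", "garden"]]
prediction, attention = answer(story, ["where", "is", "mary"])
print(prediction, attention)  # weights are untrained, so the prediction is arbitrary

In a trained system the embedding matrices are learned end-to-end from question-answer pairs, and the memory plays the role of the dynamic knowledge base mentioned in the abstract.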
dc.subject  Question answering  zh_TW
dc.subject  Memory neural networks  zh_TW
dc.subject  Long-term memory component  zh_TW
dc.subject  Question Answering  en_US
dc.subject  Memory neural networks  en_US
dc.subject  Long-term memory component  en_US
dc.title  A Rapid Deep Learning Model for Goal-Oriented Dialog  en_US
dc.language.iso  en_US  en_US
dc.type  Master's/doctoral thesis  zh_TW
dc.type  thesis  en_US
dc.publisher  National Central University  en_US
