Open-domain Question Answering (QA) systems aim at providing the exact answer(s) to questions formulated in natural language, without restriction of domain. My goal in this thesis is to develop learning models that can automatically induce new facts, together with their structure and meaning, without having to be re-trained, in order to solve multiple open-domain QA tasks. The main advantage of this framework is that it requires little feature engineering and little domain-specific knowledge, whilst matching or surpassing state-of-the-art results. Furthermore, it can easily be trained for any kind of open-domain QA task.
I investigate a new class of learning models called memory neural networks. Memory neural networks reason with inference components combined with a long-term memory component; they learn how to use these jointly. The long-term memory can be read and written to, with the goal of using it for prediction. I investigate these models in the context of question answering (QA), where the long-term memory effectively acts as a (dynamic) knowledge base and the output is a textual response. Finally, I show that an end-to-end dialog system based on memory neural networks can reach promising results and learn to perform non-trivial operations. I confirm these results by comparing my system against a variety of benchmark datasets, and I conclude by discussing future work.
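To make the idea concrete, the sketch below shows a minimal, illustrative single-hop forward pass of an end-to-end memory network for QA: story sentences are stored in the long-term memory, the question attends over (reads) those memories, and the retrieved content is combined with the question to predict a one-word textual answer. The toy vocabulary, bag-of-words sentence embeddings, dimensions, and untrained random parameters are assumptions made only for illustration; they are not the configuration used in this thesis.

import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and dimensions (illustrative assumptions only).
vocab = ["<pad>", "mary", "went", "to", "the", "kitchen", "garden", "where", "is"]
word2idx = {w: i for i, w in enumerate(vocab)}
V, d = len(vocab), 16  # vocabulary size, embedding dimension

# Randomly initialised parameters; in a real system these are learned end-to-end.
A = rng.normal(scale=0.1, size=(V, d))  # memory (input) embedding
C = rng.normal(scale=0.1, size=(V, d))  # memory (output) embedding
B = rng.normal(scale=0.1, size=(V, d))  # question embedding
W = rng.normal(scale=0.1, size=(d, V))  # final answer projection

def embed(sentence, E):
    """Bag-of-words sentence embedding: sum of the word vectors."""
    idx = [word2idx[w] for w in sentence.lower().split()]
    return E[idx].sum(axis=0)

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def answer(story, question):
    """Single-hop memory network forward pass."""
    m = np.stack([embed(s, A) for s in story])  # memory slots used for addressing
    c = np.stack([embed(s, C) for s in story])  # memory slots used for reading
    u = embed(question, B)                      # internal question representation
    p = softmax(m @ u)                          # attention (read weights) over memories
    o = p @ c                                   # retrieved memory summary
    logits = (o + u) @ W                        # score every vocabulary word as the answer
    return vocab[int(np.argmax(logits))], p

story = ["Mary went to the kitchen", "Mary went to the garden"]
prediction, attention = answer(story, "Where is Mary")
print(prediction, attention)

Writing new facts simply corresponds to appending sentences to the story list, so the memory behaves as a dynamic knowledge base that can grow at prediction time without re-training the parameters.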