NCU Institutional Repository (中大機構典藏): Item 987654321/81318


    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/81318


    Title: 基於BERT之K12數位學習聊天機器人中文問答模塊;BERT-based Chinese Question Answering Module for K12 E-Learning Chatbot
    Authors: 楊崴;Yang, Wei
    Contributors: 資訊工程學系;Department of Computer Science and Information Engineering
    Keywords: 數位學習;問答系統;機器閱讀;自然語言處理;E-Learning;Question Answering;Machine Reading Comprehension;Natural Language Processing
    Date: 2019-08-19
    Issue Date: 2019-09-03 15:44:01 (UTC+8)
    Publisher: 國立中央大學;National Central University
    Abstract: In recent years, E-Learning has become increasingly widespread and popular because it is not bound to any particular time or place and allows students to learn whenever they wish. A chatbot introduced into an E-Learning setting typically serves as a virtual teaching assistant, available at any time to help students and quickly resolve their questions. Unlike the common rule-based and retrieval-based models, a chatbot equipped with an open-domain question answering module can answer any factoid question using unstructured text as its knowledge source. However, most existing question answering systems are designed for English; when they are applied to Chinese, their performance drops noticeably because of certain structural characteristics of the Chinese language.
    This thesis proposes a Chinese question answering module that uses a BERT encoder and middle-school textbooks as its knowledge source: given a question, it retrieves the relevant textbook passages and extracts an answer from them. We further improve the BERT encoder used by the module by modifying the word-masking task in the pre-training stage, which raises answer-prediction performance in the downstream task, and we improve the module's overall performance through multi-paragraph retrieval and answer prediction for each question together with data-augmented training. Experimental results on multiple datasets show that the improved question answering module achieves strong performance.
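    As a concrete illustration of the retrieve-then-extract design described in the abstract, the sketch below shows, in Python, how a question could be matched against textbook paragraphs and answered with an extractive BERT question answering model. It is a minimal sketch, not the thesis's implementation: the fine-tuned Chinese QA checkpoint path is a placeholder, character-level TF-IDF stands in for whatever retriever the thesis actually uses, and the modified masking pre-training and data augmentation are not reproduced here.

        # Minimal retrieve-then-extract sketch (assumptions: a Chinese BERT checkpoint
        # fine-tuned for extractive QA is available at the placeholder path below;
        # TF-IDF over character n-grams stands in for the actual retriever).
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity
        from transformers import pipeline

        paragraphs = [
            "光合作用是植物利用光能將二氧化碳和水轉化為葡萄糖和氧氣的過程。",
            "細胞是生物體結構和功能的基本單位。",
        ]

        # 1. Retrieve the top-k paragraphs most similar to the question.
        #    Character n-grams avoid needing a Chinese word segmenter in this toy example.
        vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(1, 2))
        doc_matrix = vectorizer.fit_transform(paragraphs)

        def retrieve(question, k=2):
            scores = cosine_similarity(vectorizer.transform([question]), doc_matrix)[0]
            return [paragraphs[i] for i in scores.argsort()[::-1][:k]]

        # 2. Run extractive QA over every retrieved paragraph and keep the best-scoring span.
        qa = pipeline("question-answering", model="path/to/chinese-bert-qa")  # hypothetical checkpoint

        def answer(question):
            candidates = [qa(question=question, context=p) for p in retrieve(question)]
            return max(candidates, key=lambda c: c["score"])["answer"]

        print(answer("植物通過什麼過程將二氧化碳轉化為葡萄糖?"))

    Scoring each retrieved paragraph independently and keeping the highest-scoring span mirrors the multi-paragraph retrieval and answer prediction strategy that the abstract credits for part of the overall performance gain.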
    Appears in Collections:[Graduate Institute of Computer Science and Information Engineering] Electronic Thesis & Dissertation

    Files in This Item:

    File          Description    Size    Format
    index.html    -              0Kb     HTML


    All items in NCUIR are protected by copyright, with all rights reserved.
