In recent years, e-learning has become increasingly common and popular because it lets students learn anytime and anywhere. In e-learning settings, a chatbot can serve as a virtual teaching assistant that is available at all times to help students and quickly answer their questions. Unlike the common rule-based and retrieval-based models, a chatbot equipped with an open-domain question answering module can answer arbitrary factoid questions using unstructured text as its knowledge source. However, most question answering systems are designed for English; when they are applied to Chinese, performance drops noticeably because of structural characteristics of the Chinese language. In this thesis, we propose a BERT-based Chinese question answering module for chatbots that uses middle school textbooks as its knowledge source: given a question, it retrieves relevant textbook passages and extracts the answer from them. We further improve the BERT encoder used by the module by modifying the word masking task in the pre-training stage, which raises answer extraction performance in the downstream task, and we improve the overall performance of the module through multi-paragraph retrieval with answer extraction and a data augmentation fine-tuning strategy. Experimental results on multiple datasets show that the improved question answering module achieves strong performance.
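As a rough illustration of the retrieve-then-read design summarized above, the following Python sketch retrieves the passage most similar to a question and extracts an answer span with a BERT-style reader through Hugging Face Transformers. The TF-IDF retriever, the example paragraphs, and the checkpoint name are placeholder assumptions for illustration only; they are not the module proposed in this thesis.

```python
# Minimal retrieve-then-read sketch (illustrative only, not the thesis implementation).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from transformers import pipeline

# Hypothetical knowledge source: paragraphs taken from middle school textbooks.
paragraphs = [
    "光合作用是植物利用光能將二氧化碳和水轉化為有機物並釋放氧氣的過程。",
    "細胞是生物體結構和功能的基本單位。",
]

def retrieve(question, texts, top_k=1):
    """Rank paragraphs by character n-gram TF-IDF similarity to the question."""
    vec = TfidfVectorizer(analyzer="char", ngram_range=(1, 2)).fit(texts)
    scores = cosine_similarity(vec.transform([question]), vec.transform(texts)).ravel()
    return [texts[i] for i in scores.argsort()[::-1][:top_k]]

# Assumed placeholder: a publicly available Chinese extractive-QA checkpoint,
# standing in for the modified BERT encoder described in the abstract.
reader = pipeline("question-answering", model="uer/roberta-base-chinese-extractive-qa")

question = "植物通過什麼過程釋放氧氣?"
for context in retrieve(question, paragraphs):
    # Each result is a dict such as {'answer': '光合作用', 'score': ..., 'start': ..., 'end': ...}
    print(reader(question=question, context=context))
```

In the thesis itself, the single-paragraph reading step shown here is extended to multi-paragraph retrieval and answer extraction, and the reader is fine-tuned with data augmentation, as described in the abstract.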