To build a retrieval-based chatbot, we generate question-answer pairs from chat logs. However, question-answer pairs do not appear in order in a chat log: pairs on different topics may interleave with one another. The task of separating such interleaved messages into distinct conversations is called conversation disentanglement. Most existing research addresses this task by computing the similarity between two messages. In this thesis, we find that predicting whether two messages belong to the same conversation from message similarity is very difficult, but the problem can be solved if we instead use similarity to predict the reply relation between messages. We also point out that the models in previous work cannot handle unseen messages and therefore cannot be applied in practice. We conduct experiments on IRC and Reddit datasets and apply conversation disentanglement to a QNAP chat log. The synthetic Reddit dataset provides a large amount of additional training data, and the BERT model achieves good performance on reply-relation prediction on this dataset.
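The following is a minimal sketch, not the thesis' exact setup, of how reply-relation prediction with a BERT sentence-pair classifier could drive disentanglement: each message is scored against earlier messages and greedily attached to its most likely parent. The model name, label convention, and greedy-linking heuristic are assumptions for illustration; in practice the classification head would first be fine-tuned on reply/non-reply message pairs.

```python
# Sketch of scoring the reply relation between two chat messages with a
# BERT sentence-pair classifier (illustrative, not the thesis' exact model).
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# num_labels=2: "is a reply" vs. "is not a reply" (assumed labeling).
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
model.eval()

def reply_score(parent_msg: str, candidate_msg: str) -> float:
    """Return the (untrained-head) probability that candidate_msg replies to parent_msg."""
    inputs = tokenizer(parent_msg, candidate_msg, return_tensors="pt",
                       truncation=True, max_length=128)
    with torch.no_grad():
        logits = model(**inputs).logits
    # Index 1 corresponds to the "is a reply" label under the assumed convention.
    return torch.softmax(logits, dim=-1)[0, 1].item()

# Greedy disentanglement sketch: attach each message to the earlier message
# with the highest reply score, forming conversation threads.
messages = [
    "How do I reset my NAS password?",
    "Anyone tried the new firmware?",
    "Go to Control Panel > Users and click Edit.",
]
for i, msg in enumerate(messages[1:], start=1):
    scores = [reply_score(prev, msg) for prev in messages[:i]]
    parent = messages[scores.index(max(scores))]
    print(f"{msg!r} -> replies to -> {parent!r}")
```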