Amid the rapid growth of social media, unverified information circulating on these platforms can expose communities to a prolonged atmosphere of panic. From early machine-learning classifiers to the deep-learning methods popular in recent years, rumor-detection approaches have steadily improved in accuracy. This study uses Google's open-source language model, BERT, fine-tuned via transfer learning with a downstream bidirectional LSTM, yielding a BERT-BiLSTM model that classifies messages into two categories: rumors and non-rumors. Experiments confirm that fine-tuning BERT achieves better results than the baseline methods. We compiled a dataset of 1,991 food safety rumors collected from public databases maintained by government and civil organizations. The BERT-BiLSTM model achieves a test accuracy of 86.18% and an F-score of 80% on the food safety rumor dataset, and a test accuracy of 85% and an F-score of 89% on the Pheme rumor dataset, confirming the effectiveness of the proposed model; its F-score and Recall also exceed those of recent automated detection methods. In addition, this study deploys the model as a rumor-dispelling chatbot that, compared with existing rumor-dispelling chatbots, solves both the keyword-matching and response-time problems.
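The BERT-BiLSTM architecture described above can be sketched as follows. This is a minimal illustration, not the authors' exact code: the layer sizes, pooling strategy (concatenating the final forward and backward LSTM states), and the tiny randomly initialised BERT configuration are assumptions made so the sketch runs self-contained; in practice one would load pretrained weights with `BertModel.from_pretrained` and fine-tune on the rumor dataset.

```python
# Hedged sketch of a BERT-BiLSTM binary rumor classifier (PyTorch + transformers).
import torch
import torch.nn as nn
from transformers import BertConfig, BertModel


class BertBiLSTM(nn.Module):
    def __init__(self, bert: BertModel, lstm_hidden: int = 128, num_classes: int = 2):
        super().__init__()
        self.bert = bert
        self.lstm = nn.LSTM(
            input_size=bert.config.hidden_size,
            hidden_size=lstm_hidden,
            batch_first=True,
            bidirectional=True,  # BiLSTM: a forward and a backward pass over the tokens
        )
        # Two output logits: rumor vs. non-rumor
        self.classifier = nn.Linear(2 * lstm_hidden, num_classes)

    def forward(self, input_ids, attention_mask):
        # Token-level contextual embeddings from BERT
        hidden = self.bert(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state
        # Run the BiLSTM over the token sequence; h_n holds the final
        # hidden state of each direction (shape: [2, batch, lstm_hidden])
        _, (h_n, _) = self.lstm(hidden)
        pooled = torch.cat([h_n[0], h_n[1]], dim=-1)  # concat both directions
        return self.classifier(pooled)


# Tiny randomly initialised BERT so the sketch runs offline; real use would
# start from pretrained weights (e.g. "bert-base-chinese" for the food safety data).
config = BertConfig(
    vocab_size=100, hidden_size=32, num_hidden_layers=2,
    num_attention_heads=2, intermediate_size=64,
)
model = BertBiLSTM(BertModel(config))
input_ids = torch.randint(0, 100, (4, 16))
attention_mask = torch.ones(4, 16, dtype=torch.long)
logits = model(input_ids, attention_mask)
print(tuple(logits.shape))  # (4, 2): one rumor/non-rumor logit pair per input
```

During fine-tuning the logits would be trained with cross-entropy against the binary rumor labels; the chatbot front end would then threshold the softmax of these logits to decide whether to reply with a debunking message.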