Factual consistency is a critical and challenging problem in abstractive summarization and has attracted growing attention from researchers in recent years. However, prior work has focused on factual consistency in English summarization; factual consistency in Chinese summarization has not yet been evaluated or studied. We investigate an aspect of Chinese that differs notably from English, namely word segmentation (tokenization). Most current Chinese pre-trained models adopt the same tokenization scheme as BERT, which in practice is very close to pure character-level tokenization. By training Chinese BART models with different Chinese word-segmentation toolkits and fine-tuning them on the LCSTS Chinese summarization dataset, we confirm that tokenization affects not only the traditional ROUGE score but also factual consistency. Furthermore, considering the vocabulary differences between simplified and traditional Chinese, we build TWNSum, a weakly supervised abstractive summarization dataset of Taiwanese news, by extracting summaries with the simple LEAD method and filtering them with a factual-consistency evaluation, showing that generating an abstractive summarization dataset from a large unlabeled news corpus is feasible.

Hallucination is a critical and hard problem in abstractive summarization that has received increasing attention in recent years. However, hallucination in other languages, and specifically in Chinese, remains unexplored. We experiment with a procedure particular to Chinese modeling, namely tokenization, to investigate its effect on hallucination in abstractive summarization. Tokenization is rarely studied as a separate factor in English because of the characteristics of the language. In the Chinese setting, current models use either character-level tokenization or a scheme close to it, such as the BERT tokenizer. By applying different Chinese tokenizers to the BART model, we confirm that the tokenizer affects both the ROUGE score and the faithfulness of the model. Moreover, considering the difference between traditional Chinese and simplified Chinese tokenizers, we create the Taiwan Weakly supervised News Summarization dataset (TWNSum) by using the simple LEAD method together with hallucination-evaluation filtering. Our TWNSum dataset further shows that creating an abstractive summarization dataset from a large amount of unlabeled news by a weakly supervised method is feasible.