With the rise of e-commerce platforms, consumers' shopping habits have gradually shifted online, and online reviews now have a significant influence on purchasing decisions. However, the proliferation of deceptive reviews makes it difficult for consumers to distinguish genuine information; research indicates that humans identify fake reviews with an accuracy of only 57.3%. In recent years, Natural Language Processing (NLP) techniques such as BERT have been widely applied to deceptive review detection. In cross-domain applications, however, the accuracy of these models declines because vocabulary differs significantly across domains.

To address this issue, this study proposes a method that combines Word2Vec and BERT, constructing a dictionary of domain-similar words and masking the corresponding terms in the text to reduce the influence of domain-specific vocabulary on classification results. Specifically, Word2Vec is responsible for building the similar-word dictionary, while BERT handles semantic understanding and feature extraction. The model will be evaluated on several domains (hotel, restaurant, doctor, and electronics reviews) under both in-domain and cross-domain settings, with the goal of improving the classifier's ability to generalize across domains.
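The following is a minimal sketch of the kind of pipeline the abstract describes, not the authors' actual implementation: Word2Vec (via gensim) is trained on a toy domain corpus to collect neighbours of seed domain terms, those terms are replaced with BERT's mask token, and the masked review is passed to a BERT sequence classifier. The function names (build_similar_word_dict, mask_domain_terms), the seed terms, and the toy corpus are all illustrative assumptions.

from gensim.models import Word2Vec
from transformers import BertTokenizer, BertForSequenceClassification
import torch

# Toy tokenized corpus standing in for one review domain (e.g. hotel reviews).
source_corpus = [
    ["the", "hotel", "room", "was", "clean", "and", "the", "staff", "friendly"],
    ["great", "location", "but", "the", "hotel", "lobby", "was", "noisy"],
]

def build_similar_word_dict(corpus, seed_terms, topn=5):
    # Train Word2Vec on the domain corpus and gather words similar to seed domain terms.
    w2v = Word2Vec(sentences=corpus, vector_size=50, window=3, min_count=1, workers=1)
    similar = {}
    for term in seed_terms:
        if term in w2v.wv:
            similar[term] = [w for w, _ in w2v.wv.most_similar(term, topn=topn)]
    return similar

def mask_domain_terms(text, domain_dict, mask_token):
    # Replace domain-specific words and their Word2Vec neighbours with the BERT mask token.
    blocked = set(domain_dict) | {w for ws in domain_dict.values() for w in ws}
    return " ".join(mask_token if tok.lower() in blocked else tok for tok in text.split())

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
classifier = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

domain_dict = build_similar_word_dict(source_corpus, seed_terms=["hotel", "lobby"])
review = "The hotel lobby looked amazing and the staff were incredibly helpful"
masked_review = mask_domain_terms(review, domain_dict, tokenizer.mask_token)

inputs = tokenizer(masked_review, return_tensors="pt", truncation=True, padding=True)
with torch.no_grad():
    logits = classifier(**inputs).logits  # two-way logits: genuine vs. deceptive
print(masked_review, logits.softmax(dim=-1))

In a cross-domain experiment, the similar-word dictionary would be rebuilt for each source domain so that domain-specific cues are hidden before the BERT classifier, which is the mechanism the study relies on to improve generalization.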