In today's highly competitive business environment, organizations can benefit greatly from subject analysis through text classification. Among the many available methods, BERT is one of the most effective techniques in natural language processing. However, BERT is typically used as a domain-specific classification model and possesses no knowledge beyond its training data, such as the commonsense and associative understanding of the world that humans have, which limits its resemblance to human intelligence. To address this limitation, this study explores combining BERT with another valuable tool, the knowledge graph, to extend the capabilities of the classification model. By incorporating a knowledge graph, a BERT model can acquire general knowledge much as humans do, improving its classification ability. The combination of BERT and knowledge graphs therefore has the potential to significantly enhance an organization's ability to extract valuable insights from large volumes of text data. Through experiments, this study found that adding different kinds of knowledge graphs to the BERT model yields uneven gains across different classification tasks. The study also found that a knowledge-graph-augmented BERT model faces several challenges, including increased training complexity, difficulties in applying the model to both long and short texts, and ensuring the relevance between sentences and the knowledge representation units, namely knowledge triples.