NCU Institutional Repository: Item 987654321/93106


    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/93106


    Title: The Application of Common-Sense Knowledge-based BERT on Domain-Specific Tasks
    Authors: 葉詠心;Sam, Ip Weng
    Contributors: Department of Information Management
    Keywords: text classification; knowledge graph; sentiment analysis; topic categorization
    Date: 2023-06-30
    Issue Date: 2024-09-19 16:42:21 (UTC+8)
    Publisher: National Central University
    Abstract: In today's highly competitive business environment, organizations can benefit greatly from subject analysis through text classification. While several methods are available, BERT is one of the most effective techniques in natural language processing. However, BERT is typically used as a domain-specific classification model and generally possesses no knowledge beyond its training data, such as the human-like common sense about everyday facts and the relationships between things, which limits its resemblance to human intelligence.
    To address this limitation, this research explores combining BERT with another valuable instrument, the knowledge graph, to extend the classifier's capabilities. By incorporating a knowledge graph, the BERT model can acquire general knowledge much as humans do, enhancing its classification ability. The combination of BERT and knowledge graphs has the potential to significantly improve an organization's ability to extract valuable insights from large volumes of text data.
    Experiments showed that adding different types of knowledge graphs to the BERT model yields varying results across classification tasks. The study also found that a knowledge-graph-augmented BERT model faces several challenges: increased training complexity, difficulties in applying the approach to both long and short texts, and ensuring the relevance between sentences and the knowledge representation units, i.e. knowledge triples.
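    The abstract does not specify how the knowledge graph is injected into BERT; a common approach in this line of work is to serialize the triples relevant to a sentence and append them to the model input before tokenization. The sketch below illustrates that idea with hypothetical helper names and ConceptNet-style example triples; it is an illustration of the general technique, not the thesis's actual method.

    ```python
    def serialize_triples(triples):
        """Turn (head, relation, tail) triples into a plain-text clause list."""
        return " ; ".join(f"{h} {r} {t}" for h, r, t in triples)

    def build_input(sentence, triples, sep="[SEP]"):
        """Concatenate a sentence with its serialized triples, BERT-style,
        so a classifier sees both the text and the related facts."""
        if not triples:
            return sentence
        return f"{sentence} {sep} {serialize_triples(triples)}"

    # Example: a review sentence plus two ConceptNet-style triples.
    sentence = "The battery dies after an hour."
    triples = [("battery", "PartOf", "phone"),
               ("dies", "RelatedTo", "short lifespan")]
    print(build_input(sentence, triples))
    # The battery dies after an hour. [SEP] battery PartOf phone ; dies RelatedTo short lifespan
    ```

    The resulting string would then be fed to an ordinary BERT sequence classifier; as the abstract notes, keeping the appended triples relevant to the sentence, and within the model's input-length limit for long texts, is where the practical difficulty lies.
    
    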
    Appears in Collections:[Graduate Institute of Information Management] Electronic Thesis & Dissertation

    Files in This Item:

    File: index.html (0 KB, HTML)

    All items in NCUIR are protected by copyright, with all rights reserved.
