Abstract: | Open Information Extraction (OpenIE) is an important technique for natural language understanding: it simplifies complex sentences into triples of the form (entity 1, relation, entity 2), exposing the potential relations embedded in the text. In this study, we propose a pipelined architecture called CHERE (Chinese Healthcare Entity-Relationship Extraction), in which an entity recognition model and a relation extraction model are trained separately and then combined. The ME-MGNN neural network recognizes entities, and the resulting entity pairs are fed into a RoBERTa transformer-based pre-trained language model to extract the potential relation for each pair, forming a triple. Because publicly available benchmark data for Chinese OpenIE are scarce, let alone for the healthcare domain, we manually annotated a corpus collected mainly from Wikipedia, totaling 5,879 sentences with 22,944 entities and 8,879 triples. Every sentence contains at least two entity annotations, and some sentences contain entity-relationship triples. Experimental results show that our proposed CHERE model achieves an exact match F1 of 0.6966, a contain match F1 of 0.7795, and a token-level match F1 of 0.7986, outperforming the compared OpenIE models SpanOIE, Multi2OIE, and CHOIE. |
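The abstract names three evaluation criteria (exact match, contain match, token-level match) without formal definitions. The sketch below shows one plausible way such triple-level F1 scores could be computed; the matching rules, the greedy one-to-one alignment, and the character-level overlap scoring are all illustrative assumptions, not CHERE's actual evaluation code.

```python
from collections import Counter

def f1(precision, recall):
    # Harmonic mean of precision and recall; 0 when both are 0.
    return 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)

def exact_match(pred, gold):
    # A predicted triple counts only if all three elements match exactly.
    return pred == gold

def contain_match(pred, gold):
    # Looser criterion (assumed): each predicted element must contain,
    # or be contained in, the corresponding gold element.
    return all(p in g or g in p for p, g in zip(pred, gold))

def token_f1(pred, gold):
    # Character-level overlap F1 over the concatenated triple strings
    # (Chinese text is scored per character rather than per word).
    p, g = Counter("".join(pred)), Counter("".join(gold))
    overlap = sum((p & g).values())
    if overlap == 0:
        return 0.0
    return f1(overlap / sum(p.values()), overlap / sum(g.values()))

def score(preds, golds, match):
    # Greedily align each predicted triple to at most one gold triple,
    # then report F1 over the aligned pairs.
    matched, used = 0, set()
    for pr in preds:
        for i, gd in enumerate(golds):
            if i not in used and match(pr, gd):
                used.add(i)
                matched += 1
                break
    precision = matched / len(preds) if preds else 0.0
    recall = matched / len(golds) if golds else 0.0
    return f1(precision, recall)
```

Under these assumptions, a prediction like ("aspirin", "treat", "headache") against a gold triple ("aspirin", "can treat", "headache") fails exact match but passes contain match, which is consistent with the abstract's contain-match F1 exceeding its exact-match F1.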