Name: 李岳峻 (Yue-Jun Li)
Department: 人工智慧國際碩士學位學程 (International Master's Program in Artificial Intelligence)
Thesis Title: 大型語言模型增強人機協作知識圖譜建構 (LLM-Enhanced Human-AI Collaborative Knowledge Graph Construction)
Full text available in the system after 2030-07-15.
Abstract (Chinese)
With the development of artificial intelligence, knowledge graphs (KGs) have become a key technology for structured knowledge management, yet their construction remains time-consuming and labor-intensive. Although large language models (LLMs) can improve efficiency, they face challenges such as unstable generation and hallucinated content. Most existing automated tools lack interactivity and real-time correction capabilities, posing a high barrier for non-technical users. To address these problems, this study designs and implements an LLM-enhanced human-AI collaborative knowledge graph construction system named "HCIKG". By integrating speech recognition, multi-turn dialogue guidance, and an innovative Retrieval-Augmented Generation (RAG) module, HCIKG precisely converts users' natural-language commands into database query syntax. To validate the system, a four-stage experiment confirmed that HCIKG significantly outperforms traditional tools in both system usability and human-AI collaboration performance, and that its core RAG prompting strategy strikes the best balance between accuracy and computational cost. Finally, the resulting high-quality knowledge graph was successfully applied to downstream tasks such as automated exam question generation, fully demonstrating the effectiveness of the framework.
Abstract (English)
The construction of Knowledge Graphs (KGs), a key technology in structured knowledge management, presents a significant challenge due to its time-consuming and labor-intensive nature. While Large Language Models (LLMs) can enhance efficiency, they grapple with issues of instability and hallucination in content generation. Existing automated tools often lack interactivity and real-time correction capabilities, posing a high technical barrier for non-technical users. To address these issues, this study designs and implements an LLM-Enhanced Human-AI Collaborative Knowledge Graph Construction system, named "HCIKG". HCIKG integrates speech recognition, multi-turn dialogue guidance, and an innovative Retrieval-Augmented Generation (RAG) module to precisely convert users' natural language commands into database query syntax.
A four-stage experiment was conducted to validate the system's efficacy. The results indicate that HCIKG significantly outperforms traditional tools in system usability and collaborative performance. Furthermore, its core RAG-based strategy strikes an optimal balance between accuracy and computational cost. The practical utility of the framework is demonstrated by the successful application of the resulting high-quality knowledge graph in downstream tasks, such as automated exam question generation, thus confirming the framework's overall effectiveness.
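The abstract's core mechanism, using a RAG module to turn a natural-language command into a Cypher query, can be sketched as follows. This is a minimal illustration under assumed details: a toy template corpus, Jaccard token overlap as the retriever, and a plain few-shot prompt. None of these specifics come from the thesis itself.

```python
def tokenize(text):
    """Crude whitespace tokenizer used as a stand-in for a real embedder."""
    return set(text.lower().split())

# Hypothetical template corpus pairing example requests with Cypher patterns;
# the thesis's actual RAG template corpus is not reproduced here.
TEMPLATES = [
    ("create a node for a person named X",
     "CREATE (:Person {name: $name})"),
    ("link person X to organization Y",
     "MATCH (p:Person {name: $p}), (o:Org {name: $o}) "
     "CREATE (p)-[:WORKS_AT]->(o)"),
    ("find all nodes connected to X",
     "MATCH (n {name: $name})--(m) RETURN m"),
]

def retrieve(command, k=2):
    """Rank templates by Jaccard token overlap with the user command."""
    q = tokenize(command)
    scored = sorted(
        TEMPLATES,
        key=lambda t: len(q & tokenize(t[0])) / len(q | tokenize(t[0])),
        reverse=True,
    )
    return scored[:k]

def build_prompt(command):
    """Assemble a few-shot prompt grounding the LLM in retrieved templates."""
    examples = "\n".join(f"Request: {r}\nCypher: {c}"
                         for r, c in retrieve(command))
    return (f"Convert the request to a Cypher query.\n{examples}\n"
            f"Request: {command}\nCypher:")

print(build_prompt("create a node for a person named Alice"))
```

The assembled prompt would then be sent to an LLM; grounding generation in retrieved, known-good query templates is the general RAG idea the abstract credits with balancing accuracy against computational cost.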
Keywords (Chinese)
★ 大型語言模型 (Large Language Models)
★ 人機協作 (Human-AI Collaboration)
★ 知識圖譜 (Knowledge Graph)
Keywords (English)
★ RAG
Table of Contents
Abstract (Chinese)
Abstract (English)
Contents
List of Figures
List of Tables
Chapter 1. Introduction
1.1 Research Background
1.2 Research Objectives
1.3 Thesis Structure
Chapter 2. Literature Review
2.1 The Relationship Between Ontology and Knowledge Graph
2.1.1 Ontology
2.1.2 Knowledge Graph
2.2 The Evolution of Knowledge Graph Construction Technologies
2.2.1 Traditional Construction Methods
2.2.1.1 Manual Annotation
2.2.1.2 Methods Based on Semantic Web Technologies: RDF and SPARQL
2.2.2 Construction Methods Based on Large Language Models
2.2.2.1 The Transformer Architecture
2.2.2.2 Applications and Challenges of LLMs in Knowledge Graph Construction
2.3 Key Technologies for Enhancing LLM Reliability
2.3.1 Retrieval-Augmented Generation (RAG)
2.3.2 Graph Retrieval-Augmented Generation (GraphRAG)
2.4 Current State and Limitations of Automated Knowledge Graph Construction Tools
2.5 Chapter Summary
Chapter 3. System Design of HCIKG
3.1 System Architecture and Design Philosophy
3.1.1 Extension of Traditional Dialogue Systems
3.1.2 Core Architecture of HCIKG
3.2 HCIKG System Architecture
3.2.1 Speech Processing Module (A1)
3.2.2 Dialogue Management Module (A2)
3.2.3 Cypher Syntax Generation Module (A3)
3.2.4 Knowledge Graph Construction Module (A4)
3.3 Discrete Event Modeling of the HCIKG System
3.3.1 Discrete Event Model of the Speech Processing Module
3.3.2 Discrete Event Model of the Dialogue Management Module
3.3.3 Discrete Event Model of the Cypher Syntax Generation Module
3.3.4 Discrete Event Model of the Knowledge Graph Construction Module
3.4 HCIKG User Interface Design
Chapter 4. System Experiments
4.1 Experimental Design
4.1.1 Experimental Environment
4.1.2 Experiment Description
4.1.3 Datasets
4.2 High-Level Software Synthesis of the HCIKG System
4.2.1 User Interface
4.2.2 HCIKG System Operation
4.3 Experiment 1: Comparison of Knowledge Graph Construction Tools
4.3.1 Experimental Method
4.3.2 Evaluation Metrics
4.3.3 Experimental Results
4.4 Experiment 2: Accuracy Comparison of Different Prompting Strategies for Cypher Generation
4.4.1 Experimental Method
4.4.1.1 Test Dataset
4.4.1.2 Prompting Strategies
4.4.1.3 RAG Template Corpus
4.4.2 Evaluation Metrics
4.4.3 Experimental Results
4.5 Experiment 3: Evaluation of Human-Computer Collaboration Performance
4.5.1 Experimental Method
4.5.2 Evaluation Metrics
4.5.3 Experimental Results
4.6 Experiment 4: Application in Knowledge Graph-Driven Exam Question Generation
4.6.1 Question Generation Method
4.6.1.1 Knowledge Graph Structure Design
4.6.1.2 Question Type Generation Principles
4.6.1.3 Question Item Difficulty Design
4.6.1.4 Generation Quality Validation
4.6.2 Generated Question Examples
4.6.3 Experimental Results
Chapter 5. Conclusion and Future Work
5.1 Conclusion
5.2 Future Work
Chapter 6. References
Appendix A. Test Sentence Dataset
Appendix B. Manually Constructed KG Triples
Appendix C. The 20 Test Items
Appendix D. Test Item Difficulty Design
Advisor: 陳慶瀚 (Ching-han Chen)
Approval Date: 2025-07-23