References
[1] W. Yun, X. Zhang, Z. Li, H. Liu, and M. Han, “Knowledge modeling: A survey of processes and techniques”, International Journal of Intelligent Systems, vol. 36, no. 4, pp. 1686–1720, 2021.
[2] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, et al., “Attention is all you need”, Advances in Neural Information Processing Systems 30, pp. 5998–6008, 2017.
[3] M. Shanahan, “Talking about large language models”, arXiv preprint arXiv:2212.03551, 2022.
[4] J. Achiam, S. Adler, S. Agarwal, L. Ahmad, I. Akkaya, F. L. Aleman, et al., “GPT-4 technical report”, arXiv preprint arXiv:2303.08774, 2023.
[5] H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, et al., “LLaMA: Open and Efficient Foundation Language Models”, arXiv preprint arXiv:2302.13971, 2023.
[6] R. Thoppilan, D. De Freitas, J. Hall, N. Shazeer, A. Kulshreshtha, H.-T. Cheng, et al., “LaMDA: Language Models for Dialog Applications”, arXiv preprint arXiv:2201.08239, 2022.
[7] A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, et al., “PaLM: Scaling Language Modeling with Pathways”, Journal of Machine Learning Research, vol. 24, no. 240, pp. 1-113, 2023.
[8] A timeline of Google’s biggest AI and ML moments. Accessed: Jan. 23, 2024. [Online]. Available: https://blog.google/technology/ai/google-ai-ml-timeline/
[9] S. Mandvikar, “Augmenting Intelligent Document Processing (IDP) Workflows with Contemporary Large Language Models (LLMs)”, International Journal of Computer Trends and Technology, vol. 71, no. 10, pp. 80-91, 2023.
[10] V. Bilgram and F. Laarmann, “Accelerating Innovation With Generative AI: AI-Augmented Digital Prototyping and Innovation Methods”, IEEE Engineering Management Review, vol. 51, no. 2, pp. 18-25, 2023.
[11] Z. Ji, N. Lee, R. Frieske, T. Yu, D. Su, Y. Xu, et al., “Survey of Hallucination in Natural Language Generation”, ACM Computing Surveys, vol. 55, no. 12, pp. 1-38, 2023.
[12] K. Liang, Z. Zhang, and J. F. Fisac, “Introspective Planning: Guiding Language-Enabled Agents to Refine Their Own Uncertainty”, arXiv preprint arXiv:2402.06529, 2024.
[13] A. Pal, L. K. Umapathi, and M. Sankarasubbu, “Med-HALT: Medical Domain Hallucination Test for Large Language Models”, in Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL), 2023, pp. 314-334.
[14] Y. Yao, J. Duan, K. Xu, Y. Cai, Z. Sun, and Y. Zhang, “A Survey on Large Language Model (LLM) Security and Privacy: The Good, the Bad, and the Ugly”, arXiv preprint arXiv:2312.02003, 2023.
[15] N. Carlini, F. Tramèr, E. Wallace, M. Jagielski, A. Herbert-Voss, K. Lee, et al., “Extracting training data from large language models”, in 30th USENIX Security Symposium (USENIX Security 21), 2021, pp. 2633-2650.
[16] H. Zhao, H. Chen, F. Yang, N. Liu, H. Deng, H. Cai, et al., “Explainability for Large Language Models: A Survey”, ACM Transactions on Intelligent Systems and Technology, vol. 15, no. 2, pp. 1-38, 2024.
[17] P. Lewis, E. Perez, A. Piktus, F. Petroni, V. Karpukhin, N. Goyal, et al., “Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks”, in 34th Conference on Neural Information Processing Systems (NeurIPS 2020), 2020, pp. 9459–9474.
[18] O. Ovadia, M. Brief, M. Mishaeli, and O. Elisha, “Fine-Tuning or Retrieval? Comparing Knowledge Injection in LLMs”, arXiv preprint arXiv:2312.05934, 2024.
[19] S. Barnett, S. Kurniawan, S. Thudumu, Z. Brannelly, and M. Abdelrazek, “Seven Failure Points When Engineering a Retrieval Augmented Generation System”, arXiv preprint arXiv:2401.05856, 2024.
[20] P. BehnamGhader, S. Miret, and S. Reddy, “Can Retriever-Augmented Language Models Reason? The Blame Game Between the Retriever and the Language Model”, arXiv preprint arXiv:2212.09146, 2022.
[21] J. Chen, H. Lin, X. Han, and L. Sun, “Benchmarking Large Language Models in Retrieval-Augmented Generation”, in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 38, no. 16, pp. 17754-17762, 2024.
[22] Y. Gao, Y. Xiong, X. Gao, K. Jia, J. Pan, Y. Bi, et al., “Retrieval-Augmented Generation for Large Language Models: A Survey”, arXiv preprint arXiv:2312.10997, 2024.
[23] S. Ji, S. Pan, E. Cambria, P. Marttinen, and P. S. Yu, “A Survey on Knowledge Graphs: Representation, Acquisition and Applications”, IEEE Transactions on Neural Networks and Learning Systems, vol. 33, no. 2, pp. 494-514, 2022.
[24] Z. Ji, Z. Liu, N. Lee, T. Yu, B. Wilie, M. Zeng, et al., “RHO (ρ): Reducing Hallucination in Open-domain Dialogues with Knowledge Grounding”, arXiv preprint arXiv:2212.01588, 2022.
[25] S. Pan, L. Luo, Y. Wang, C. Chen, J. Wang, and X. Wu, “Unifying Large Language Models and Knowledge Graphs: A Roadmap”, IEEE Transactions on Knowledge and Data Engineering, 2024.
[26] C. Peng, F. Xia, M. Naseriparsa, and F. Osborne, “Knowledge Graphs: Opportunities and Challenges”, Artificial Intelligence Review, vol. 56, pp. 13071-13102, 2023.
[27] F. Kitsios and M. Kamariotou, “Artificial Intelligence and Business Strategy towards Digital Transformation: A Research Agenda”, Sustainability, vol. 13, no. 4, Art. no. 2025, 2021.
[28] Y. Xu, X. Liu, X. Cao, C. Huang, E. Liu, S. Qian, et al., “Artificial intelligence: A powerful paradigm for scientific research”, Innovation, vol. 2, no. 4, 2021.
[29] D. Mhlanga, “Industry 4.0 in Finance: The Impact of Artificial Intelligence (AI) on Digital Financial Inclusion”, International Journal of Financial Studies, vol. 8, no. 3, Art. no. 45, 2020.
[30] Introducing ChatGPT. Accessed: Apr. 09, 2024. [Online]. Available: https://openai.com/blog/chatgpt
[31] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, et al., “Generative Adversarial Networks”, Advances in Neural Information Processing Systems 27, pp. 2672-2680, 2014.
[32] S. Islam, H. Elmekki, A. Elsebai, J. Bentahar, N. Drawel, G. Rjoub, et al., “A Comprehensive Survey on Applications of Transformers for Deep Learning Tasks”, Expert Systems with Applications, vol. 241, Art. no. 122666, 2023.
[33] G. Iglesias, E. Talavera, and A. Díaz-Álvarez, “A survey on GANs for computer vision: Recent research, analysis and taxonomy”, Computer Science Review, vol. 48, Art. no. 100553, 2023.
[34] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, et al., “Attention is all you need”, Advances in Neural Information Processing Systems 30, pp. 5998–6008, 2017.
[35] R. Bommasani, D. A. Hudson, E. Adeli, R. Altman, S. Arora, S. von Arx, et al., “On the Opportunities and Risks of Foundation Models”, arXiv preprint arXiv:2108.07258, 2021.
[36] W. X. Zhao, K. Zhou, J. Li, T. Tang, X. Wang, Y. Hou, et al., “A Survey of Large Language Models”, arXiv preprint arXiv:2303.18223, 2023.
[37] S. Bubeck, V. Chandrasekaran, R. Eldan, J. Gehrke, E. Horvitz, E. Kamar, et al., “Sparks of Artificial General Intelligence: Early experiments with GPT-4”, arXiv preprint arXiv:2303.12712, 2023.
[38] OpenAI, “GPT-4 Technical Report”, arXiv preprint arXiv:2303.08774, 2023.
[39] Z. Xu, S. Jain, and M. Kankanhalli, “Hallucination is Inevitable: An Innate Limitation of Large Language Models”, arXiv preprint arXiv:2401.11817, 2024.
[40] Introducing the Knowledge Graph: things, not strings. Accessed: Mar. 05, 2024. [Online]. Available: https://blog.google/products/search/introducing-knowledge-graph-things-not/
[41] Neo4j. Accessed: Mar. 06, 2024. [Online]. Available: https://neo4j.com/
[42] P. Liu, Y. Huang, P. Wang, Q. Zhao, J. Nie, Y. Tang, et al., “Construction of typhoon disaster knowledge graph based on graph database Neo4j”, in 2020 Chinese Control and Decision Conference (CCDC), pp. 3612-3616, 2020.
[43] A. Radford, J. W. Kim, T. Xu, G. Brockman, C. McLeavey, and I. Sutskever, “Robust Speech Recognition via Large-Scale Weak Supervision”, in Proceedings of the 40th International Conference on Machine Learning (PMLR), Honolulu, HI, USA, vol. 202, pp. 28492-28518, 2023.
[44] 司法院裁判書系統 (Judicial Yuan Judgments System, Taiwan). Accessed: Apr. 05, 2024. [Online]. Available: https://judgment.judicial.gov.tw/FJUD/default.aspx
[45] Introducing Meta Llama 3: The most capable openly available LLM to date. Accessed: Jun. 07, 2024. [Online]. Available: https://ai.meta.com/blog/meta-llama-3/
[46] C.-H. Chen, M.-Y. Lin, and X.-C. Guo, “High-level modeling and synthesis of smart sensor networks for Industrial Internet of Things”, Computers & Electrical Engineering, vol. 61, pp. 48-66, 2017.
[47] The world’s most walkable cities revealed (and they aren’t in the US). Accessed: Jun. 10, 2024. [Online]. Available: https://edition.cnn.com/travel/travel-news-walkable-cities
[48] Z. Fan and C. Chen, “CuPe-KG: Cultural perspective–based knowledge graph construction of tourism resources via pretrained language models”, Information Processing & Management, vol. 61, no. 3, 2024.
[49] X. Wang, L. Chen, T. Ban, M. Usman, Y. Guan, S. Liu, et al., “Knowledge graph quality control: A survey”, Fundamental Research, vol. 1, no. 5, pp. 607-626, 2021.
[50] B. Xue and L. Zou, “Knowledge Graph Quality Management: A Comprehensive Survey”, IEEE Transactions on Knowledge and Data Engineering, vol. 35, no. 5, pp. 4969-4988, 2023.