Master's/Doctoral Thesis 111522155 — Detailed Record




Author: Min-Chao Hung (洪閔昭)    Department: Computer Science and Information Engineering
Thesis Title: Research on Document-Level Person Relation Extraction in Chinese (中文文章級別人物關係擷取之研究)
Related Theses
★ Recognition of Event-Invitation Emails and Extraction of Irregular Time Expressions
★ Design of the NCUFree Campus Wireless Network Platform and Development of Application Services
★ Design and Implementation of a Semi-Structured Web Data Extraction System
★ Mining and Applications of Non-Simple Browsing Paths
★ Improvements to Incremental Association Rule Mining
★ Applying the Chi-Square Test of Independence to Associative Classification
★ Design and Study of a Chinese Data Extraction System
★ Visualization of Non-Numerical Data and Clustering Combining Subjective and Objective Criteria
★ A Study of Correlated Word Groups in Document Summarization
★ Cleaning Web Pages: Page Segmentation and Data-Region Extraction
★ Design and Study of Sentence Classification and Ranking in Question Answering Systems
★ Efficient Mining of Compact Frequent Serial Episodes in Temporal Databases
★ Axis Arrangement in Star Coordinates for Cluster Visualization
★ Automatic Generation of Web Scraping Programs from Browsing Histories
★ Template and Data Analysis of Dynamic Web Pages
★ Automated Integration of Homogeneous Web Data
Full text: available for viewing in the system after 2024-12-31
Abstract (Chinese) The motivation of this study is to build a joint entity-relation extraction framework that can be applied to real-world web data. Existing datasets usually come from a single source, such as Wikipedia, so models trained on them struggle to generalize to diverse web content. Moreover, existing datasets focus mainly on the sentence level, whereas cross-sentence and cross-paragraph entity-relation recognition is more common in real applications, yet research on document-level relation extraction remains relatively scarce. To address the lack of Chinese datasets, we leverage advanced large language models to assist with data annotation, advancing research on Chinese relation extraction.
Our study proposes a universal generative annotation pipeline that uses large language models such as Gemini and GPT-3.5 to annotate unlabeled document-level content, saving substantial human effort and time while improving annotation efficiency and accuracy. Using Common Crawl as the source of our annotation corpus, we build a more broadly applicable dataset, resolving the single-source problem of traditional datasets. Furthermore, thanks to the enhanced capabilities of large language models (LLMs), we experimented with feeding longer documents into the models and indeed extracted roughly 30% cross-sentence relations.
To overcome the blind spots of any single model, we adopt cross-validation to increase the credibility of the annotations, and we introduce an entity augmentation method that remedies the under-sampling of entity pairs a model suffers when facing many entities, improving the completeness of our annotations.
Finally, we fine-tune pre-trained models with smaller parameter counts on the annotated dataset to evaluate their performance on real-world web data. This fine-tuning not only verifies the quality of the annotated dataset but also further improves the models' adaptability to real web environments.
Overall, our study innovates in its technical methods and provides new ideas and resources for future research on relation extraction and named entity recognition. We look forward to these methods and resources being applied and validated more broadly on diverse web data, advancing the field.
Abstract (English) The motivation of this study is to construct a joint entity-relation extraction framework for real-world web data. Existing datasets typically come from single sources such as Wikipedia, so models trained on them struggle to generalize to diverse web content. Additionally, these datasets focus mainly on sentence-level information, while cross-sentence and cross-paragraph entity-relation recognition is more common in real applications; research on document-level relation extraction remains insufficient. To address the lack of Chinese datasets, we leverage advanced large language models for data annotation, advancing research in Chinese relation extraction.
Our study proposes a universal generative annotation process, using large language models such as Gemini and GPT-3.5 to annotate unlabeled document-level content. This approach saves significant human and time resources while improving annotation efficiency and accuracy. We use Common Crawl data as the source for our dataset, creating a more versatile dataset and addressing the issue of single-source datasets. Thanks to the enhanced capabilities of large language models (LLMs), we experimented with processing longer documents and found that approximately 30% of the extracted relations span sentence boundaries.
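The abstract does not say how cross-sentence relations were counted; as a rough illustration only, one plausible check is to call a triple "cross-sentence" when its two entity mentions never co-occur in a single sentence. The sentence splitter, function names, and toy data below are assumptions, not the thesis's actual code:

```python
# Illustrative sketch (not the thesis's implementation): classify an
# extracted triple as intra- or cross-sentence by checking whether both
# entity mentions co-occur in any single sentence of the document.
import re

def split_sentences(text):
    # Naive split on Chinese/Western sentence-final punctuation.
    return [s for s in re.split(r"[。!?.!?]", text) if s.strip()]

def is_cross_sentence(doc, head, tail):
    # Intra-sentence if some sentence mentions both entities.
    return not any(head in s and tail in s for s in split_sentences(doc))

def cross_sentence_ratio(doc, triples):
    # Fraction of (head, relation, tail) triples whose two entities
    # never appear together in one sentence.
    cross = sum(is_cross_sentence(doc, h, t) for h, _, t in triples)
    return cross / len(triples) if triples else 0.0

# Toy document: the relation (陳先生, 父親, 洪小姐) is only recoverable
# by reading across the two sentences.
doc = "洪小姐出生於台北。她的父親是陳先生。"
triples = [("陳先生", "父親", "洪小姐")]
```

Run over a whole corpus, a counter of this kind would yield the sort of ~30% figure reported above.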
To address the limitations of a single model, we adopted a cross-validation approach to improve annotation credibility. We also introduced an entity augmentation method to address insufficient entity-pair sampling, enhancing overall annotation completeness.
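As a minimal sketch of these two ideas — trusting only triples that both annotating models agree on, and enumerating entity pairs that neither model sampled — the following illustrates one way it could work; the normalization and data structures are assumptions, not the thesis's implementation:

```python
# Illustrative sketch: cross-validate two models' triple sets and
# enumerate unsampled entity pairs for a second annotation pass.

def normalize(triple):
    # Strip whitespace so trivially different surface forms still match.
    head, rel, tail = triple
    return (head.strip(), rel.strip(), tail.strip())

def cross_validate(triples_a, triples_b):
    # Keep only triples that both annotating models produced.
    return {normalize(t) for t in triples_a} & {normalize(t) for t in triples_b}

def unsampled_pairs(entities, triples):
    # Entity pairs with no labeled relation in either direction are
    # candidates for a targeted augmentation pass.
    seen = {(h, t) for h, _, t in triples} | {(t, h) for h, _, t in triples}
    return [(a, b)
            for i, a in enumerate(entities)
            for b in entities[i + 1:]
            if (a, b) not in seen]
```

The agreement step trades recall for precision; the unsampled-pair step then recovers coverage by sending the missed pairs back for labeling.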
Finally, we fine-tuned pre-trained models with smaller parameter counts on our dataset to evaluate their performance on real web data. This fine-tuning process both tests the quality of the dataset and enhances the models' adaptability to different web environments.
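The exact fine-tuning setup is not given in the abstract; assuming a generative text-to-text formulation (e.g. an mT5-style encoder-decoder), one hypothetical way to turn annotated documents into (input, target) training pairs is sketched below — the separators and prompt prefix are illustrative assumptions, not the thesis's actual scheme:

```python
# Hypothetical sketch: linearize annotated triples into seq2seq training
# pairs for fine-tuning a small generative model. The separators and the
# prompt prefix are assumptions, not the thesis's actual format.
TRIPLE_SEP = " ; "
FIELD_SEP = " | "

def linearize(triples):
    # "head | relation | tail ; head | relation | tail ..."
    return TRIPLE_SEP.join(FIELD_SEP.join(t) for t in triples)

def make_example(document, triples, prefix="擷取人物關係: "):
    # One training pair: the prefixed document as model input, the
    # linearized triples as the generation target.
    return {"input": prefix + document, "target": linearize(triples)}
```

Pairs of this shape can be fed directly to any standard sequence-to-sequence training loop; decoding then reverses the linearization to recover predicted triples.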
Overall, our study introduces innovative technical methods and provides
new ideas and resources for future research in relationship extraction and named
entity recognition. We anticipate broader application and validation in diverse
web data, promoting further development in this field.
Keywords
★ Relation Extraction (關係擷取)
★ Document-level Relation Extraction (文章級關係擷取)
★ Named Entity Recognition (命名實體識別)
★ Joint Entity and Relation Extraction (聯合實體關係擷取)
Table of Contents
Abstract (Chinese)
Abstract (English)
Table of Contents
List of Figures
List of Tables
1. Introduction
1-1 Motivation
1-2 Objectives
1-3 Contributions
2. Related Work
2-1 Traditional Joint Entity-Relation Extraction
2-2 Generative Joint Entity-Relation Extraction
3. Corpus Preprocessing
3-1 Common Crawl Preprocessing
3-1-1 Deduplication
3-1-2 Language Identification
3-1-3 Quality Filtering
3-2 Preprocessing Statistics
4. Large Language Model Annotation Pipeline
4-1 Document-Level Challenges
4-2 Triple Generation
4-3 Relation Classification
4-4 Cross-Validation
4-5 Data Merging
5. Performance Evaluation and Entity Augmentation
5-1 NER Evaluation
5-2 Hallucination Evaluation
5-3 Cross-Sentence Evaluation
5-4 Entity-Relation Augmentation
5-5 Entity-Pair Evaluation
6. Model Training
6-1 Universal Generation
6-2 Pipeline
6-3 Gemini Entity Augmentation
7. Conclusion
Index
References
Advisor: Chia-Hui Chang (張嘉惠)    Approval Date: 2024-08-20