References
[1] Noam Chomsky. Syntactic Structures. Mouton Publishers, The Hague, Paris, 1957.
[2] Noam Chomsky. Aspects of the Theory of Syntax. MIT Press, 1965.
[3] Alex Woloch. The One vs. the Many: Minor Characters and the Space of the Protagonist in the Novel. Princeton University Press, 2003.
[4] Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Sofia, Bulgaria, August 2013. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/P13-1129.
[5] Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, volume 1 (Long Papers), April 2017.
[6] Apoorv Agarwal, Sriramkumar Balasubramanian, Jiehan Zheng, and Sarthak Dash. Parsing screenplays for extracting social networks from movies. In CLfL@EACL, 2014.
[7] Arthur Amalvy. Visualisation de relations entre personnages à l’aide de techniques de traitement du langage. Technical Report, University of Technology of Belfort-Montbéliard, 2019.
[8] Mathieu Bastian, Sebastien Heymann, and Mathieu Jacomy. Gephi: An open source software for exploring and manipulating networks. In International AAAI Conference on Weblogs and Social Media (ICWSM), 2009. URL http://www.aaai.org/ocs/index.php/ICWSM/09/paper/view/154.
[9] Anthony Bonato, David Ryan D’Angelo, Ethan R. Elenberg, David F. Gleich, and Yangyang Hou. Mining and modeling character networks. ArXiv, abs/1608.00646, 2016.
[10] Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. On the properties of neural machine translation: Encoder-decoder approaches. ArXiv, abs/1409.1259, 2014.
[11] C.J. Hutto and Eric Gilbert. VADER: A parsimonious rule-based model for sentiment analysis of social media text. In International AAAI Conference on Weblogs and Social Media (ICWSM), 2014.
[12] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT, 2019.
[13] Adam Ek, Mats Wirén, Robert Östling, Kristina Nilsson Björkenstam, Gintare Grigonyte, and Sofia Gustafson-Capková. Identifying speakers and addressees in dialogues extracted from literary fiction. In LREC, 2018.
[14] Jeffrey L. Elman. Finding structure in time. Cognitive Science, 14(2):179–211, 1990. doi: 10.1207/s15516709cog1402_1.
[15] David Elson and Kathleen McKeown. Automatic attribution of quoted speech in literary narrative. In AAAI, 2010.
[16] Association for Computational Linguistics, editor. Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, 2012.
[17] Santo Fortunato. Community detection in graphs. ArXiv, abs/0906.0612, 2009.
[18] Sebastian Gil, Laney Kuenzel, and Caroline Suen. Extraction and analysis from plays and movies. Technical report, Stanford University, 2011.
[19] Kevin R. Glass and Shaun Bangay. A naïve, salience-based method for speaker identification in fiction books. 2007.
[20] Hua He, Denilson Barbosa, and Grzegorz Kondrak. Identification of speakers in novels. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) acl [4], pages 1312–1320. URL https://www.aclweb.org/anthology/P13-1129.
[21] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778, 2016.
[22] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9:1735–80, 12 1997. doi: 10.1162/neco.1997.9.8.1735.
[23] Sarthak Jain and Byron C. Wallace. Attention is not explanation. ArXiv, abs/1902.10186, 2019.
[24] Zhengbao Jiang, Wei Xu, Jun Araki, and Graham Neubig. Generalizing natural language analysis through span-relation representations. ArXiv, abs/1911.03822, 2019.
[25] Sethunya Joseph, Kutlwano Sedimo, Freeson Kaniwa, Hlomani Hlomani, and Keletso Letsholo. Natural language processing: A review. Natural Language Processing: A Review, 6:207–210, 03 2016.
[26] Mandar Joshi, Omer Levy, Daniel S. Weld, and Luke Zettlemoyer. BERT for coreference resolution: Baselines and analysis. In EMNLP/IJCNLP, 2019.
[27] Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. SpanBERT: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64–77, 2020.
[28] Rafal Jozefowicz, Wojciech Zaremba, and Ilya Sutskever. An empirical exploration of recurrent network architectures. In Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37, ICML'15, pages 2342–2350. JMLR.org, 2015.
[29] Vincent Labatut and Xavier Bost. Extraction and analysis of fictional character networks: A survey. ACM Computing Surveys, 2019.
[30] Elizabeth D. Liddy. Natural language processing. 2001.
[31] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. RoBERTa: A robustly optimized BERT pretraining approach. ArXiv, abs/1907.11692, 2019.
[32] Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pages 55–60, 2014. URL http://www.aclweb.org/anthology/P/P14/P14-5010.
[33] Tomas Mikolov, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781, 2013.
[34] George A. Miller. WordNet: A lexical database for English. Communications of the ACM, 38(11):39–41, 1995.
[35] Franco Moretti. Network theory, plot analysis. New Left Review, 2011.
[36] Grace Muzny, Michael Fang, Angel X. Chang, and Dan Jurafsky. A two-stage sieve approach for quote attribution. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics eac [5], pages 460–470.
[37] Timothy O’Keefe, Silvia Pareti, James R. Curran, Irena Koprinska, and Matthew Honnibal. A sequence labelling approach to quote attribution. In Association for Computational Linguistics [16], pages 790–799.
[38] Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. In ICML, 2013.
[39] Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. Deep contextualized word representations. ArXiv, abs/1802.05365, 2018.
[40] Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. Technical report, OpenAI, 2018.
[41] Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. Technical report, OpenAI, 2019.
[42] Yannick Rochat. Character Networks and Centrality. PhD thesis, Université de Lausanne, 2014.
[43] Alexander Rush. The annotated transformer. In Proceedings of Workshop for NLP Open Source Software (NLP-OSS), pages 52–60, 2018. doi: 10.18653/v1/W18-2509.
[44] Stuart P. Lloyd. Least squares quantization in PCM. IEEE Transactions on Information Theory, 28(2):129–136, 1982.
[45] Wilson L. Taylor. "cloze procedure": a new tool for measuring readability. Journalism & Mass Communication Quarterly, 30:415–433, 1953.
[46] Hardik Vala, Stefan Dimitrov, David Jurgens, Andrew Piper, and Derek Ruths. Annotating characters in literary corpora: A scheme, the CHARLES tool, and an annotated novel. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16), pages 184–189, Portorož, Slovenia, May 2016. European Language Resources Association (ELRA). URL https://www.aclweb.org/anthology/L16-1028.
[47] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. ArXiv, abs/1706.03762, 2017.
[48] Jesse Vig. A multiscale visualization of attention in the transformer model. In ACL, 2019.
[49] Joseph Weizenbaum. ELIZA - a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1):36–45, 1966.
[50] Sarah Wiegreffe and Yuval Pinter. Attention is not not explanation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 11–20, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-1002. URL https://www.aclweb.org/anthology/D19-1002.
[51] Y. Bengio, P. Simard, and P. Frasconi. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157–166, March 1994.
[52] Yannick Rochat and Mathieu Triclot. Les réseaux de personnages de science-fiction : échantillons de lectures intermédiaires. ReS Futurae, 2017.
[53] Chak Yan Yeung and John Lee. Identifying speakers and listeners of quoted speech in literary works. In IJCNLP, 2017.
[54] Tom Young, Devamanyu Hazarika, Soujanya Poria, and Erik Cambria. Recent trends in deep learning based natural language processing. IEEE Computational Intelligence Magazine, 13:55–75, 2018.