References
Alhelbawy, A., Lattimer, M., Kruschwitz, U., Fox, C., & Poesio, M. (2020). An NLP-powered human rights monitoring platform. Expert Systems with Applications, 153, 113365.
Alkaraan, F., Albitar, K., Hussainey, K., & Venkatesh, V. G. (2022). Corporate transformation toward Industry 4.0 and financial performance: The influence of environmental, social, and governance (ESG). Technological Forecasting and Social Change, 175, Article 121423. https://doi.org/10.1016/j.techfore.2021.121423
Amel-Zadeh, A., & Serafeim, G. (2018). Why and how investors use ESG information: Evidence from a global survey. Financial Analysts Journal, 74(3), 87-103.
Arbane, M., Benlamri, R., Brik, Y., & Alahmar, A. D. (2023). Social media-based COVID-19 sentiment classification model using Bi-LSTM. Expert Systems with Applications, 212, 118710.
Aydogmus, M., Gülay, G., & Ergun, K. (2022). Impact of ESG performance on firm value and profitability. Borsa Istanbul Review, 22, S119-S127. https://doi.org/10.1016/j.bir.2022.11.006
Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., & Askell, A. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems, 33, 1877-1901.
De Vincentiis, P. (2024). ESG news, stock volatility and tactical disclosure. Research in International Business and Finance, 68, Article 102187. https://doi.org/10.1016/j.ribaf.2023.102187
Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
El-Kassas, W. S., Salama, C. R., Rafea, A. A., & Mohamed, H. K. (2021). Automatic text summarization: A comprehensive survey. Expert Systems with Applications, 165, 113679.
Fraiberger, S. P., Lee, D., Puy, D., & Ranciere, R. (2021). Media sentiment and international asset prices. Journal of International Economics, 133, 103526.
Gomez, M. J., Calderón, M., Sánchez, V., Clemente, F. J. G., & Ruipérez-Valiente, J. A. (2022). Large scale analysis of open MOOC reviews to support learners’ course selection. Expert Systems with Applications, 210, 118400.
Guo, M., Ainslie, J., Uthus, D., Ontanon, S., Ni, J., Sung, Y.-H., & Yang, Y. (2021). LongT5: Efficient text-to-text transformer for long sequences. arXiv preprint arXiv:2112.07916.
Jiang, A. Q., Sablayrolles, A., Mensch, A., Bamford, C., Chaplot, D. S., Casas, D. d. l., Bressand, F., Lengyel, G., Lample, G., & Saulnier, L. (2023). Mistral 7B. arXiv preprint arXiv:2310.06825.
Lee, J., & Kim, M. (2023). ESG information extraction with cross-sectoral and multi-source adaptation based on domain-tuned language models. Expert Systems with Applications, 221, 119726.
Li, Z., Peng, B., He, P., Galley, M., Gao, J., & Yan, X. (2023). Guiding large language models via directional stimulus prompting. arXiv preprint arXiv:2302.11520.
Lin, C.-Y. (2004). ROUGE: A package for automatic evaluation of summaries. Text Summarization Branches Out.
Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., & Stoyanov, V. (2019). RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
Lu, J., & Eirinaki, M. (2021). Can a machine win a Grammy? An evaluation of AI-generated song lyrics. 2021 IEEE International Conference on Big Data, Big Data 2021.
Ma, C., Zhang, W. E., Guo, M., Wang, H., & Sheng, Q. Z. (2022). Multi-document summarization via deep learning techniques: A survey. ACM Computing Surveys, 55(5), 1-37.
Mathur, A., & Suchithra, M. (2022). Application of abstractive summarization in multiple choice question generation. 1st International Conference on Computational Intelligence and Sustainable Engineering Solution, CISES 2022.
Medhat, W., Hassan, A., & Korashy, H. (2014). Sentiment analysis algorithms and applications: A survey. Ain Shams Engineering Journal, 5(4), 1093-1113.
Mehta, S., Sekhavat, M. H., Cao, Q., Horton, M., Jin, Y., Sun, C., Mirzadeh, I., Najibi, M., Belenko, D., & Zatloukal, P. (2024). OpenELM: An efficient language model family with open-source training and inference framework. arXiv preprint arXiv:2404.14619.
Miller, D. (2019). Leveraging BERT for extractive text summarization on lectures. arXiv preprint arXiv:1906.04165.
Oniani, D., & Wang, Y. (2020). A qualitative evaluation of language models on automatic question-answering for COVID-19. 11th ACM International Conference on Bioinformatics, Computational Biology and Health Informatics, BCB 2020.
Pedersen, L. H., Fitzgibbons, S., & Pomorski, L. (2021). Responsible investing: The ESG-efficient frontier. Journal of Financial Economics, 142(2), 572-597.
Penedo, G., Malartic, Q., Hesslow, D., Cojocaru, R., Cappelli, A., Alobeidli, H., Pannier, B., Almazrouei, E., & Launay, J. (2023). The RefinedWeb dataset for Falcon LLM: Outperforming curated corpora with web data, and web data only. arXiv preprint arXiv:2306.01116.
Perez-Beltrachini, L., & Lapata, M. (2021). Multi-document summarization with determinantal point process attention. Journal of Artificial Intelligence Research, 71, 371-399.
Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving language understanding by generative pre-training. OpenAI.
Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1), 5485-5551.
Ranade, P., Piplai, A., Mittal, S., Joshi, A., & Finin, T. (2021). Generating fake cyber threat intelligence using transformer-based models. 2021 International Joint Conference on Neural Networks, IJCNN 2021.
Rawat, R., Rawat, P., Elahi, V., & Elahi, A. (2021). Abstractive summarization on dynamically changing text. 5th International Conference on Computing Methodologies and Communication, ICCMC 2021.
Reimers, N., & Gurevych, I. (2019). Sentence-BERT: Sentence embeddings using Siamese BERT-networks. arXiv preprint arXiv:1908.10084.
Salvador, J., Bansal, N., Akter, M., Sarkar, S., Das, A., & Karmaker, S. K. (2024). Benchmarking LLMs on the semantic overlap summarization task. arXiv preprint arXiv:2402.17008.
Santu, S. K. K., & Feng, D. (2023). TELeR: A general taxonomy of LLM prompts for benchmarking complex tasks. arXiv preprint arXiv:2305.11430.
Seok, J., Lee, Y., & Kim, B. D. (2020). Impact of CSR news reports on firm value. Asia Pacific Journal of Marketing and Logistics, 32(3), 644-663. https://doi.org/10.1108/apjml-06-2019-0352
Team, G., Mesnard, T., Hardin, C., Dadashi, R., Bhupatiraju, S., Pathak, S., Sifre, L., Rivière, M., Kale, M. S., & Love, J. (2024). Gemma: Open models based on Gemini research and technology. arXiv preprint arXiv:2403.08295.
Touvron, H., Martin, L., Stone, K., Albert, P., Almahairi, A., Babaei, Y., Bashlykov, N., Batra, S., Bhargava, P., & Bhosale, S. (2023). Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30.
Wu, Z., & Ma, G. (2024). NLP-based approach for automated safety requirements information retrieval from project documents. Expert Systems with Applications, 239, 122401.
Xiao, W., Beltagy, I., Carenini, G., & Cohan, A. (2021). PRIMERA: Pyramid-based masked sentence pre-training for multi-document summarization. arXiv preprint arXiv:2110.08499.
Zhang, J., Zhao, Y., Saleh, M., & Liu, P. (2020). PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization. International Conference on Machine Learning.
Zhang, T., Kishore, V., Wu, F., Weinberger, K. Q., & Artzi, Y. (2019). BERTScore: Evaluating text generation with BERT. arXiv preprint arXiv:1904.09675.