References
[1] IATA, "Air passenger market analysis December 2020," https://www.iata.org/economics/air-passenger-market-analysis-december-2020/. Accessed: Mar. 17, 2024.
[2] IATA, "Air passenger market analysis December 2023," https://www.iata.org/en/iata-repository/publications/economic-reports/air-passenger-market-analysis-december-2023/. Accessed: Mar. 17, 2024.
[3] S. Hong, M. Savoie, S. Joiner, and T. Kincaid, "Analysis of airline employees' perceptions of corporate preparedness for COVID-19 disruptions to airline operations," Transport Policy, vol. 119, pp. 45-55, 2022.
[4] J. Doe and J. Smith, "The impact of digital transformation on the retailing value chain," Journal of Business Research, vol. 125, pp. 10-20, 2023.
[5] A. Haleem, M. Javaid, M. Qadri, R. Singh, and R. Suman, "Artificial Intelligence (AI) applications for marketing: A literature-based study," International Journal of Intelligent Networks, vol. 3, pp. 119-132, 2022.
[6] H. Gil-Gomez, V. Guerola-Navarro, R. Oltra-Badenes, and J. A. Lozano-Quilis, "Customer relationship management: Digital transformation and sustainable business model innovation," Economic Research-Ekonomska Istraživanja, vol. 33, no. 1, pp. 2733-2750, 2020.
[7] E. Ernawati, S. Baharin, and F. Kasmin, "A review of data mining methods in RFM-based customer segmentation," Journal of Physics: Conference Series, vol. 1869, no. 1, p. 012085, 2021.
[8] Gartner, "Gartner predicts conversational AI will reduce contact center agent labor costs by $80 billion in 2026," https://www.gartner.com/en/newsroom/press-releases/2022-08-31-gartner-predicts-conversational-ai-will-reduce-contac. Accessed: Mar. 17, 2024.
[9] C. Bascur and C. Rusu, "Customer experience in retail: A systematic literature review," Applied Sciences, vol. 10, no. 21, p. 7644, 2020.
[10] Gartner, "Gartner says conversational AI capabilities will help drive worldwide contact center market to 16% growth in 2023," https://www.gartner.com/en/newsroom/press-releases/2023-07-31-gartner-says-conversational-ai-capabilities-will-help-drive-worldwide-contact-center-market-to-16-percent-growth-in-2023. Accessed: Mar. 17, 2024.
[11] J. Berg, E. Buesing, P. Hurst, V. Lai, and S. Mukhopadhyay, "The state of customer care in 2022," McKinsey & Company. https://www.mckinsey.com/capabilities/operations/our-insights/the-state-of-customer-care-in-2022. Accessed: Mar. 17, 2024.
[12] S. Minaee, N. Kalchbrenner, E. Cambria, N. Nikzad, M. Chenaghlu, and J. Gao, "Deep learning-based text classification: A comprehensive review," ACM Computing Surveys, vol. 54, no. 3, pp. 1-40, 2021.
[13] M. Al-Ayyoub, H. Seelawi, M. Zaghlol, H. Al-Natsheh, S. Suileman, A. Fadel, R. Badawi, A. Morsy, I. Tuffaha, and M. Alijarrah, "Overview of the mowjaz multi-topic labelling task," in 2021 12th International Conference on Information and Communication Systems (ICICS), 2021, pp. 502-508.
[14] H. Hardy, K. Baker, L. Devillers, L. Lamel, S. Rosset, T. Strzalkowski, and N. Webb, "Multi-layer dialogue annotation for automated multilingual customer service," in Proc. of the ISLE Workshop on Dialogue Tagging for Multimodal Human Computer Interaction, Edinburgh, Dec. 2002.
[15] X. Zhang, J. Chen, R. Zheng, L. Li, X. Wang, and S. Lei, "A Multi-level and Multi-label Annotation Strategy for User Questions in ICT Customer Service," in 2020 IEEE 4th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC), 2020, vol. 1, pp. 410-415.
[16] Y. Liu, B. Cao, K. Ma, and J. Fan, "Improving the classification of call center service dialogue with key utterances," Wireless Networks, vol. 27, no. 5, pp. 3395-3406, 2021.
[17] K. Poczeta, M. Płaza, T. Michno, M. Krechowicz, and M. Zawadzki, "A multi-label text message classification method designed for applications in call/contact centre systems," Applied Soft Computing, vol. 145, p. 110562, 2023.
[18] C. Liu, P. Wang, J. Xu, Z. Li, and J. Ye, "Automatic dialogue summary generation for customer service," in Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2019, pp. 1957-1965.
[19] H. Zhang, L. Xiao, W. Chen, Y. Wang, and Y. Jin, "Multi-task label embedding for text classification," arXiv preprint arXiv:1710.07210, 2017.
[20] T. Wu, R. Su, and B. Juang, "A label-aware BERT attention network for zero-shot multi-intent detection in spoken language understanding," in Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2021, pp. 4884-4896.
[21] I. Casanueva, I. Vulić, G. Spithourakis, and P. Budzianowski, "NLU++: A multi-label, slot-rich, generalisable dataset for natural language understanding in task-oriented dialogue," arXiv preprint arXiv:2204.13021, 2022.
[22] 李彥伯, "Multi-label classification of dynamic topics and texts based on BERT," thesis, National Chi Nan University, 2023.
[23] 林立中, "Exploring applications of multi-label classification," thesis, National Taiwan University, 2022.
[24] 蘇成恩, "Multi-label dialogue generation for patients with depression," thesis, National Taiwan University, 2022.
[25] 林英延, "Applying multi-label classification methods to question answering systems," thesis, National Chung Hsing University, 2022.
[26] H. Liang, X. Sun, Y. Sun, and Y. Gao, "Text feature extraction based on deep learning: a review," EURASIP journal on wireless communications and networking, vol. 2017, no. 1, pp. 1-12, 2017.
[27] F. Shiri, T. Perumal, N. Mustapha, and R. Mohamed, "A comprehensive overview and comparative analysis on deep learning models: CNN, RNN, LSTM, GRU," arXiv preprint arXiv:2305.17473, 2023.
[28] H. Mohammed, E. Dogdu, A. Görür, and R. Choupani, "Multi-label classification of text documents using deep learning," in 2020 IEEE International Conference on Big Data (Big Data), 2020, pp. 4681-4689.
[29] A. Al-Qerem, M. Raja, S. Taqatqa, and M. Sara, "Utilizing Deep Learning Models (RNN, LSTM, CNN-LSTM, and Bi-LSTM) for Arabic Text Classification," in Artificial Intelligence-Augmented Digital Twins: Transforming Industrial Operations for Innovation and Sustainability, Cham: Springer Nature Switzerland, pp. 287-301, 2024.
[30] P. Liu, X. Qiu, and X. Huang, "Recurrent neural network for text classification with multi-task learning," arXiv preprint arXiv:1605.05101, 2016.
[31] 郭建宏, "Accelerating a Chinese natural language semantic analysis system based on the Bi-LSTM algorithm," thesis, National Taipei University of Technology, 2021.
[32] E. Ahmadzadeh, H. Kim, O. Jeong, N. Kim, and I. Moon, "A deep bidirectional LSTM-GRU network model for automated ciphertext classification," IEEE access, vol. 10, pp. 3228-3237, 2022.
[33] M. Zulqarnain, R. Ghazali, M. Ghouse, and M. Mushtaq, "Efficient processing of GRU based on word embedding for text classification," JOIV: International Journal on Informatics Visualization, vol. 3, no. 4, pp. 377-383, 2019.
[34] A. Shewalkar, "Performance evaluation of deep neural networks applied to speech recognition: RNN, LSTM and GRU," Master's thesis, North Dakota State University, 2018.
[35] J. Devlin, M. Chang, K. Lee, and K. Toutanova, "Bert: Pre-training of deep bidirectional transformers for language understanding," arXiv preprint arXiv:1810.04805, 2018.
[36] C. Xu, W. Zhou, T. Ge, F. Wei, and M. Zhou, "Bert-of-theseus: Compressing bert by progressive module replacing," arXiv preprint arXiv:2002.02925, 2020.
[37] P. Rust, J. Pfeiffer, I. Vulić, S. Ruder, and I. Gurevych, "How good is your tokenizer? on the monolingual performance of multilingual language models," arXiv preprint arXiv:2012.15613, 2020.
[38] S. Choo and W. Kim, "A study on the evaluation of tokenizer performance in natural language processing," Applied Artificial Intelligence, vol. 37, no. 1, p. 2175112, 2023.
[39] H. Yang, "Bert meets chinese word segmentation," arXiv preprint arXiv:1909.09292, 2019.
[40] Y. Lai, Y. Liu, Y. Feng, S. Huang, and D. Zhao, "Lattice-BERT: leveraging multi-granularity representations in Chinese pre-trained language models," arXiv preprint arXiv:2104.07204, 2021.
[41] Y. Tian, "Multi-label Text Classification Combining BERT and Bi-GRU Based on the Attention Mechanism," Journal of Network Intelligence, vol. 8, no. 1, pp. 168-180, 2023.
[42] A. Tarekegn, M. Giacobini, and K. Michalak, "A review of methods for imbalanced multi-label classification," Pattern Recognition, vol. 118, p. 107965, 2021.
[43] J. Johnson and T. Khoshgoftaar, "Survey on deep learning with class imbalance," Journal of Big Data, vol. 6, no. 1, pp. 1-54, 2019.
[44] Y. Yang, Y. Lin, H. Chu, and H. Lin, "Deep learning with a rethinking structure for multi-label classification," in Asian Conference on Machine Learning, 2019, pp. 125-140.
[45] A. Blanco, A. Casillas, A. Perez, and A. de Ilarraza, "Multi-label clinical document classification: Impact of label-density," Expert Systems with Applications, vol. 138, p. 112835, 2019.
[46] A. Pal, M. Selvakumar, and M. Sankarasubbu, "Multi-label text classification using attention-based graph neural network," arXiv preprint arXiv:2003.11644, 2020.
[47] Y. Hou, Y. Lai, Y. Wu, W. Che, and T. Liu, "Few-shot learning for multi-label intent detection," in Proceedings of the AAAI Conference on Artificial Intelligence, 2021, vol. 35, no. 14, pp. 13036-13044.
[48] R. Wang and X. Dai, "Contrastive learning-enhanced nearest neighbor mechanism for multi-label text classification," in Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), 2022, pp. 672-679.
[49] 林育任, "A discussion of threshold adjustment strategies for rare labels in multi-label classification," thesis, National Taiwan University, 2023.
[50] Y. Chen, Y. Chen, and C. Hsu, "G-TransRec: A transformer-based next-item recommendation with time prediction," IEEE Transactions on Computational Social Systems, pp. 4-5, 2024.
[51] B. Ghojogh and A. Ghodsi, "Attention mechanism, transformers, BERT, and GPT: Tutorial and Survey," 2020.
[52] K. Nakamura and B. W. Hong, "Adaptive weight decay for deep neural networks," IEEE Access, vol. 7, pp. 118857-118865, 2019.
[53] Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer and V. Stoyanov, "Roberta: A robustly optimized bert pretraining approach," arXiv preprint arXiv:1907.11692, 2019.
[54] "CKIP Lab 中文詞知識庫小組, " 中央研究院. https://ckip.iis.sinica.edu.tw/project/ws. Accessed: Mar. 17, 2024.
[55] 吳澤鑫, "深度神經網路於中文斷詞之研究," 學位論文, 國立臺灣科技大學學位論文, 2023.
[56] 徐子杰, "基於Stacking與Transformer的中文斷詞模型之研究," 國立臺灣科技大學學位論文, 2023. |