References
Al-Fraihat, D., Joy, M., Masa’deh, R., & Sinclair, J. (2020). Evaluating E-learning systems success: An empirical study. Computers in Human Behavior, 102, 67–86. https://doi.org/10.1016/j.chb.2019.08.004
Alshibly, H. H. (2014). Evaluating E-HRM success: A Validation of the Information Systems Success Model. International Journal of Human Resource Studies, 4(3), 107–124. https://doi.org/10.5296/ijhrs.v4i3.5929
Bahdanau, D., Cho, K., & Bengio, Y. (2016). Neural Machine Translation by Jointly Learning to Align and Translate. ArXiv:1409.0473 [Cs, Stat]. http://arxiv.org/abs/1409.0473
Bailey, J. E., & Pearson, S. W. (1983). Development of a Tool for Measuring and Analyzing Computer User Satisfaction. Management Science, 29(5), 530–545. https://doi.org/10.1287/mnsc.29.5.530
Balaban, I., Mu, E., & Divjak, B. (2013). Development of an electronic Portfolio system success model: An information systems approach. Computers & Education, 60(1), 396–411. https://doi.org/10.1016/j.compedu.2012.06.013
Beltagy, I., Lo, K., & Cohan, A. (2019). SciBERT: A Pretrained Language Model for Scientific Text. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 3615–3620. https://doi.org/10.18653/v1/D19-1371
Bird, S., & Loper, E. (2004). NLTK: The Natural Language Toolkit. Proceedings of the ACL Interactive Poster and Demonstration Sessions, 214–217. https://aclanthology.org/P04-3031
Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D., Wu, J., Winter, C., … Amodei, D. (2020). Language Models are Few-Shot Learners. Advances in Neural Information Processing Systems, 33, 1877–1901. https://papers.nips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html
Cao, J., & Lai, C. (2020). A Bilingual Multi-type Spam Detection Model Based on M-BERT. GLOBECOM 2020 - 2020 IEEE Global Communications Conference, 1–6. https://doi.org/10.1109/GLOBECOM42002.2020.9347970
Chang, H. H., Wang, Y.-H., & Yang, W.-Y. (2009). The impact of e-service quality, customer satisfaction and loyalty on e-marketing: Moderating effect of perceived value. Total Quality Management & Business Excellence, 20(4), 423–443. https://doi.org/10.1080/14783360902781923
David, R., Duke, J., Jain, A., Reddi, V. J., Jeffries, N., Li, J., Kreeger, N., Nappier, I., Natraj, M., Regev, S., Rhodes, R., Wang, T., & Warden, P. (2021). TensorFlow Lite Micro: Embedded Machine Learning on TinyML Systems. ArXiv:2010.08678 [Cs]. https://doi.org/10.48550/arXiv.2010.08678
de Araújo, A. F., & Marcacini, R. M. (2021). RE-BERT: Automatic extraction of software requirements from app reviews using BERT language model. Proceedings of the 36th Annual ACM Symposium on Applied Computing, 1321–1327. https://doi.org/10.1145/3412841.3442006
Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. ArXiv:1810.04805 [Cs]. http://arxiv.org/abs/1810.04805
Elman, J. L. (1990). Finding Structure in Time. Cognitive Science, 14(2), 179–211. https://doi.org/10.1207/s15516709cog1402_1
Gao, Z., Feng, A., Song, X., & Wu, X. (2019). Target-Dependent Sentiment Classification With BERT. IEEE Access, 7, 154290–154299. https://doi.org/10.1109/ACCESS.2019.2946594
García, S., Ramírez-Gallego, S., Luengo, J., Benítez, J. M., & Herrera, F. (2016). Big data preprocessing: Methods and prospects. Big Data Analytics, 1(1), 9. https://doi.org/10.1186/s41044-016-0014-0
Greff, K., Srivastava, R. K., Koutník, J., Steunebrink, B. R., & Schmidhuber, J. (2017). LSTM: A Search Space Odyssey. IEEE Transactions on Neural Networks and Learning Systems, 28(10), 2222–2232. https://doi.org/10.1109/TNNLS.2016.2582924
Hochreiter, S., & Schmidhuber, J. (1997). Long Short-Term Memory. Neural Computation, 9(8), 1735–1780. https://doi.org/10.1162/neco.1997.9.8.1735
Islam, M. T., Huda, N., Baumber, A., Shumon, R., Zaman, A., Ali, F., Hossain, R., & Sahajwalla, V. (2021). A global review of consumer behavior towards e-waste and implications for the circular economy. Journal of Cleaner Production, 316, 128297. https://doi.org/10.1016/j.jclepro.2021.128297
Joshi, A., Kale, S., Chandel, S., & Pal, D. K. (2015). Likert Scale: Explored and Explained. Current Journal of Applied Science and Technology, 396–403. https://doi.org/10.9734/BJAST/2015/14975
Karita, S., Chen, N., Hayashi, T., Hori, T., Inaguma, H., Jiang, Z., Someki, M., Soplin, N. E. Y., Yamamoto, R., Wang, X., Watanabe, S., Yoshimura, T., & Zhang, W. (2019). A Comparative Study on Transformer vs RNN in Speech Applications. 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 449–456. https://doi.org/10.1109/ASRU46091.2019.9003750
Kazhuparambil, S., & Kaushik, A. (2020). Cooking Is All About People: Comment Classification On Cookery Channels Using BERT and Classification Models (Malayalam-English Mix-Code). ArXiv:2007.04249 [Cs, Stat]. http://arxiv.org/abs/2007.04249
Kim, Y. (2014). Convolutional Neural Networks for Sentence Classification. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 1746–1751. https://doi.org/10.3115/v1/D14-1181
LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444. https://doi.org/10.1038/nature14539
Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C. H., & Kang, J. (2020). BioBERT: A pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4), 1234–1240. https://doi.org/10.1093/bioinformatics/btz682
Li, Z., Shang, W., & Yan, M. (2016). News text classification model based on topic model. 2016 IEEE/ACIS 15th International Conference on Computer and Information Science (ICIS), 1–5. https://doi.org/10.1109/ICIS.2016.7550929
Ling, C., Yimin, L., & Lianlian, J. (2021). Fault Text Classification of Rotating Machine Based BERT. 2021 33rd Chinese Control and Decision Conference (CCDC), 6744–6750. https://doi.org/10.1109/CCDC52312.2021.9602286
Liu, C., Sheng, Y., Wei, Z., & Yang, Y.-Q. (2018). Research of Text Classification Based on Improved TF-IDF Algorithm. 2018 IEEE International Conference of Intelligent Robotic and Control Engineering (IRCE), 218–222. https://doi.org/10.1109/IRCE.2018.8492945
Liu, S., Le, F., Chakraborty, S., & Abdelzaher, T. (2021). On Exploring Attention-based Explanation for Transformer Models in Text Classification. 2021 IEEE International Conference on Big Data (Big Data), 1193–1203. https://doi.org/10.1109/BigData52589.2021.9671639
Marivate, V., & Sefara, T. (2020). Improving Short Text Classification Through Global Augmentation Methods. Machine Learning and Knowledge Extraction, 385–399. https://doi.org/10.1007/978-3-030-57321-8_21
Mekala, R. R., Irfan, A., Groen, E. C., Porter, A., & Lindvall, M. (2021). Classifying User Requirements from Online Feedback in Small Dataset Environments using Deep Learning. 2021 IEEE 29th International Requirements Engineering Conference (RE), 139–149. https://doi.org/10.1109/RE51729.2021.00020
Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient Estimation of Word Representations in Vector Space. ArXiv:1301.3781 [Cs]. http://arxiv.org/abs/1301.3781
Miller, G. A. (1995). WordNet: A lexical database for English. Communications of the ACM, 38(11), 39–41. https://doi.org/10.1145/219717.219748
Nguyen, D. Q., Vu, T., & Tuan Nguyen, A. (2020). BERTweet: A pre-trained language model for English Tweets. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, 9–14. https://doi.org/10.18653/v1/2020.emnlp-demos.2
Palmer, A., Schneider, N., Schluter, N., Emerson, G., Herbelot, A., & Zhu, X. (Eds.). (2021). Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021). Association for Computational Linguistics. https://aclanthology.org/2021.semeval-1.0
Peters, M. E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., & Zettlemoyer, L. (2018). Deep Contextualized Word Representations. Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), 2227–2237. https://doi.org/10.18653/v1/N18-1202
Rajpurkar, P., Zhang, J., Lopyrev, K., & Liang, P. (2016). SQuAD: 100,000+ Questions for Machine Comprehension of Text. Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, 2383–2392. https://doi.org/10.18653/v1/D16-1264
Stefanovic, D., Marjanovic, U., Delić, M., Culibrk, D., & Lalic, B. (2016). Assessing the success of e-government systems: An employee perspective. Information & Management, 53(6), 717–726. https://doi.org/10.1016/j.im.2016.02.007
Sun, C., Qiu, X., Xu, Y., & Huang, X. (2019). How to Fine-Tune BERT for Text Classification? Chinese Computational Linguistics, 194–206. https://doi.org/10.1007/978-3-030-32381-3_16
Taherdoost, H. (2016). Validity and Reliability of the Research Instrument; How to Test the Validation of a Questionnaire/Survey in a Research (SSRN Scholarly Paper No. 3205040). Social Science Research Network. https://doi.org/10.2139/ssrn.3205040
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is All you Need. Advances in Neural Information Processing Systems, 30. https://proceedings.neurips.cc/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html
Wang, A., Singh, A., Michael, J., Hill, F., Levy, O., & Bowman, S. (2018). GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, 353–355. https://doi.org/10.18653/v1/W18-5446
DeLone, W. H., & McLean, E. R. (2003). The DeLone and McLean Model of Information Systems Success: A Ten-Year Update. Journal of Management Information Systems, 19(4), 9–30. https://doi.org/10.1080/07421222.2003.11045748
Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., von Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., … Rush, A. (2020). Transformers: State-of-the-Art Natural Language Processing. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, 38–45. https://doi.org/10.18653/v1/2020.emnlp-demos.6
Yang, J., Dou, Y., Xu, X., Ma, Y., & Tan, Y. (2021). A BERT and Topic Model Based Approach to reviews Requirements Analysis. 2021 14th International Symposium on Computational Intelligence and Design (ISCID), 387–392. https://doi.org/10.1109/ISCID52796.2021.00094
Yang, T.-Y., Yang, Y.-T., Chen, J.-R., & Lu, C.-C. (2019). Correlation between owner brand and firm value – Case study on a private brand in Taiwan. Asia Pacific Management Review, 24(3), 232–237. https://doi.org/10.1016/j.apmrv.2018.06.002
Yang, Y., Uy, M. C. S., & Huang, A. (2020). FinBERT: A Pretrained Language Model for Financial Communications. ArXiv:2006.08097 [Cs]. http://arxiv.org/abs/2006.08097
Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R. R., & Le, Q. V. (2019). XLNet: Generalized Autoregressive Pretraining for Language Understanding. Advances in Neural Information Processing Systems, 32. https://papers.nips.cc/paper/2019/hash/dc6a7e655d7e5840e66733e9ee67cc69-Abstract.html
Zhou, Q., Yang, N., Wei, F., & Zhou, M. (2017). Selective Encoding for Abstractive Sentence Summarization. Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 1095–1104. https://doi.org/10.18653/v1/P17-1101
Zhu, Y., Kiros, R., Zemel, R., Salakhutdinov, R., Urtasun, R., Torralba, A., & Fidler, S. (2015). Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books. 2015 IEEE International Conference on Computer Vision (ICCV), 19–27. https://doi.org/10.1109/ICCV.2015.11