Thesis 108453014: Detailed Record




Author: 章鴻琳 (Hung-Ling Chang)    Department: Department of Information Management (Executive Master Program)
Thesis Title: 客製化任務導向於電信場域之智慧文字客服
(An Intelligent Chatbot with the Customized Task-based Sequence to Sequence Model)
Related Theses
★ Trend Analysis of the Taiwan 50 ETF: Prediction Based on a Multiple Long Short-Term Memory Model Architecture
★ Gold Price Prediction Analysis Based on Multiple Recurrent Neural Network Models
★ Incremental Learning for Defect Detection in Industry 4.0
★ A Study of Recurrent Neural Networks for Predicting Computer Component Sales Prices
★ A Study of Long Short-Term Memory Networks for Phishing Website Prediction
★ A Study of Frequency-Hopping Signal Recognition Based on Deep Learning
★ Opinion Leader Discovery in Dynamic Social Networks
★ Deep Learning Models for Virtual Metrology of Machines in Industry 4.0
★ A Novel NMF-Based Movie Recommendation with Time Decay
★ Category-Based Sequence-to-Sequence POI Travel Itinerary Recommendation
★ A DQN-Based Reinforcement Learning Model for Neural Network Architecture Search
★ Neural Network Architecture Optimization Based on Virtual Reward Reinforcement Learning
★ Generative Adversarial Network Architecture Search
★ Neural Network Architecture Search Optimization via a Progressive Genetic Algorithm
★ Enhanced Model Agnostic Meta Learning with Meta Gradient Memory
★ A Study of Stock Price Prediction Using Recurrent Neural Networks Combined with Leading Industrial Wastewater Indicators
Files: [EndNote RIS format]    [BibTeX format]    [Related articles]   [Article citations]   [Full record]   [Library catalog]   View thesis in the system (available after 2026-8-1)
Abstract (Chinese): Natural language processing (NLP) occupies an important position in artificial intelligence applications, and its range of applications is very broad, including translation between languages, assigning appropriate tags to articles, identifying authors by their writing style, and dialogue robots. An intelligent robot that can help answer customer questions not only helps reduce enterprise costs but also provides uninterrupted 24/7 service quality. However, today's knowledge-base answering robots built on conditional rules can satisfy only about 60% of customer questions; this study proposes a model aimed at raising both the response rate and the accuracy.
This study takes the telecommunications industry as an example. Because telecom customer service must cover many knowledge domains, including network issues, handset issues, promotion issues, billing issues, and collection issues, this experiment uses a two-stage dialogue learning framework and, in the second stage, adds the question-task condition and customer attributes to the response content, thereby improving the response rate and accuracy.
The experimental results, verified with two text-generation evaluation methods, show that the model trained with question classification and customer attributes improves response accuracy and thereby raises the response rate.
Abstract (English): Natural language processing (NLP) holds a very important position in artificial intelligence applications, and its range of applications is extensive, including translation between languages, assigning appropriate tags to articles, identifying authors by their writing style, and dialogue robots. A smart robot that can assist in answering customer questions not only helps reduce enterprise costs but also provides uninterrupted 24/7 service. However, knowledge-base answering robots built with conditional rules can satisfy only about 60% of customer questions. This research therefore proposes a model to improve both the response rate and the accuracy of answers to customers' questions.
This study takes the telecommunications industry as an example. Since telecom customer service must cover a variety of knowledge areas, including network issues, mobile phone issues, promotion issues, billing issues, and collection issues, this experiment uses a two-stage dialogue learning framework; in the second stage, the question-task conditions and customer attributes are added to the response content to achieve an improved response rate and accuracy.
The experimental results were verified with two text-generation evaluation methods: the model trained with question classification and customer attributes improves the accuracy of responses, thereby increasing the response rate.
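The second-stage conditioning described in the abstract (adding a question-task label and customer attributes to the model's input) can be sketched as follows. This is a minimal illustration with hypothetical token names and attribute keys, not the thesis's actual preprocessing code:

```python
# Sketch: conditioning a seq2seq input on a question class and customer
# attributes by prepending special control tokens (hypothetical names).

def build_conditioned_input(question_tokens, question_class, customer_attrs):
    """Prepend control tokens so the decoder can tailor its response
    to the question category and the customer's profile."""
    control = [f"<class:{question_class}>"] + [
        f"<attr:{k}={v}>" for k, v in sorted(customer_attrs.items())
    ]
    return control + question_tokens

tokens = build_conditioned_input(
    ["why", "is", "my", "bill", "higher"],
    question_class="billing",
    customer_attrs={"plan": "prepaid", "vip": "yes"},
)
print(tokens)
# ['<class:billing>', '<attr:plan=prepaid>', '<attr:vip=yes>',
#  'why', 'is', 'my', 'bill', 'higher']
```

The conditioned sequence is then fed to the encoder in place of the raw question, so the same encoder-decoder architecture learns class- and attribute-dependent responses without structural changes.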
Keywords (Chinese) ★ dialogue robot
★ word embedding
★ recurrent neural network model
★ attention sequence-to-sequence
★ customized dialogue
Keywords (English) ★ dialogue robot
★ word embedding
★ GRU
★ attention sequence to sequence
★ customized dialogue
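The "attention sequence to sequence" keyword refers to the decoder computing, at each step, a weighted average (context vector) over the encoder's hidden states. A minimal dot-product attention in plain Python, using made-up 3-dimensional states purely for illustration:

```python
import math

def attention(decoder_state, encoder_states):
    """Dot-product attention: score each encoder hidden state against
    the current decoder state, softmax the scores into weights, and
    return the weighted context vector plus the weights."""
    scores = [sum(d * e for d, e in zip(decoder_state, enc))
              for enc in encoder_states]
    m = max(scores)                              # stabilize the softmax
    exp = [math.exp(s - m) for s in scores]
    total = sum(exp)
    weights = [e / total for e in exp]
    dim = len(decoder_state)
    context = [sum(w * enc[i] for w, enc in zip(weights, encoder_states))
               for i in range(dim)]
    return context, weights

# Toy encoder states (identity-like); the decoder state matches the first.
enc = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
ctx, w = attention([1.0, 0.0, 0.0], enc)
print([round(x, 3) for x in w])  # highest weight falls on the first state
```

In the thesis's setting, the encoder states would come from a GRU over the (conditioned) question, but the weighting mechanism is the same.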
Thesis Outline
Table of Contents
Chinese Abstract
Abstract
Acknowledgements
Table of Contents
List of Figures
List of Tables
Chapter 1 Introduction
1.1 Research Background
1.2 Research Motivation and Objectives
1.3 Research Contributions
1.4 Thesis Organization
Chapter 2 Literature Review
2.1 Text Processing
2.2 Chatbot Models
2.3 Controllable Response Generation
Chapter 3 Research Methodology
3.1 Data Preprocessing
3.2 General Response Model
3.3 Customized Attention Sequence-to-Sequence Model Based on Question Classification
Chapter 4 Analysis of Experimental Results
4.1 Dataset
4.2 Experimental Model Evaluation
4.2.1 BLEU (Bilingual Evaluation Understudy)
4.2.2 ROUGE (Recall-Oriented Understudy for Gisting Evaluation)
4.3 Experimental Case Studies
Chapter 5 Conclusion
5.1 Research Results
5.2 Research Findings
5.3 Research Limitations
5.4 Future Research and Suggestions
References
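The two evaluation metrics named in Chapter 4, BLEU and ROUGE, can be illustrated with toy unigram versions: BLEU-1's modified unigram precision (shown here without the brevity penalty) and ROUGE-1 recall. This is a simplified sketch, not the full metrics as defined by Papineni et al. (2002) and Lin (2004):

```python
from collections import Counter

def bleu1(candidate, reference):
    """Modified unigram precision (BLEU-1 without the brevity penalty):
    clipped count of candidate tokens found in the reference,
    divided by the candidate length."""
    cand, ref = Counter(candidate), Counter(reference)
    overlap = sum(min(n, ref[tok]) for tok, n in cand.items())
    return overlap / len(candidate)

def rouge1_recall(candidate, reference):
    """ROUGE-1 recall: overlapping unigrams divided by reference length."""
    cand, ref = Counter(candidate), Counter(reference)
    overlap = sum(min(n, cand[tok]) for tok, n in ref.items())
    return overlap / len(reference)

ref = "your bill includes a one time activation fee".split()
cand = "your bill includes an activation fee".split()
print(bleu1(cand, ref))          # 5/6: five of six candidate tokens match
print(rouge1_recall(cand, ref))  # 5/8: five of eight reference tokens covered
```

BLEU rewards precision of the generated response against the reference answer, while ROUGE rewards coverage of the reference; reporting both, as the thesis does, guards against models that are accurate but terse or verbose but imprecise.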
References
Afsheen, S., & Khan, S. (2012). Determinants of Customer Satisfaction in Telecom Industry: A Study of Telecom Industry, Peshawar, KPK, Pakistan. 2, 12833–12840.
Bordes, A., Boureau, Y.-L., & Weston, J. (2017). Learning End-to-End Goal-Oriented Dialog. The 5th International Conference on Learning Representations (ICLR 2017).
Cho, K., van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., & Bengio, Y. (2014). Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 1724–1734.
Chung, J., Gulcehre, C., Cho, K., & Bengio, Y. (2014). Empirical evaluation of gated recurrent neural networks on sequence modeling. In NIPS 2014 Workshop on Deep Learning, December 2014
Hao, L., & Hao, L. (2008). Automatic Identification of Stop Words in Chinese Text Classification. 2008 International Conference on Computer Science and Software Engineering, 1, 718–722.
Hochreiter, S., & Schmidhuber, J. (1997). Long Short-Term Memory. Neural Computation, 9(8), 1735–1780.
Jozefowicz, R., Zaremba, W., & Sutskever, I. (2015). An Empirical Exploration of Recurrent Network Architectures. Proceedings of the 32nd International Conference on Machine Learning, PMLR 37, 2342–2350.
Keskar, N. S., McCann, B., Varshney, L. R., Xiong, C., & Socher, R. (2019). CTRL: A Conditional Transformer Language Model for Controllable Generation. arXiv:1909.05858 [cs].
Khandelwal, S., Lecouteux, B., & Besacier, L. (2016). Comparing GRU and LSTM for Automatic Speech Recognition [Research report]. LIG.
Li, J., Galley, M., Brockett, C., Spithourakis, G., Gao, J., & Dolan, B. (2016). A Persona-Based Neural Conversation Model. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 994–1003.
Lin, C.-Y. (2004). ROUGE: A Package for Automatic Evaluation of Summaries. Text Summarization Branches Out, 74–81.
Madotto, A., Lin, Z., Wu, C.-S., & Fung, P. (2019). Personalizing Dialogue Agents via Meta-Learning. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 5454–5459.
Mei, H., Bansal, M., & Walter, M. (2017). Coherent Dialogue with Attention-Based Language Models. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1), Article 1.
Mikolov, T., Chen, K., Corrado, G. & Dean, J. (2013). Efficient Estimation of Word Representations in Vector Space. CoRR, abs/1301.3781.
Papineni, K., Roukos, S., Ward, T., & Zhu, W.-J. (2002). Bleu: A Method for Automatic Evaluation of Machine Translation. Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, 311–318.
Raffel, C. & Ellis, D. P. W. (2015). Feed-Forward Networks with Attention Can Solve Some Long-Term Memory Problems.. CoRR, abs/1512.08756.
Ritter, A., Cherry, C., & Dolan, W. B. (2011). Data-Driven Response Generation in Social Media. Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, 583–593.
See, A., Roller, S., Kiela, D., & Weston, J. (2019). What makes a good conversation? How controllable attributes affect human judgments. (NAACL 2019)
Shekhar, R., & Jawahar, C. V. (2012). Word Image Retrieval Using Bag of Visual Words. 2012 10th IAPR International Workshop on Document Analysis Systems, 297–301.
Sordoni, A., Galley, M., Auli, M., Brockett, C., Ji, Y., Mitchell, M., Nie, J.-Y., Gao, J., & Dolan, B. (2015). A Neural Network Approach to Context-Sensitive Generation of Conversational Responses. Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 196–205.
Sutskever, I., Vinyals, O., & Le, Q. V. (2014). Sequence to Sequence Learning with Neural Networks. In Z. Ghahramani, M. Welling, C. Cortes, N. Lawrence, & K. Q. Weinberger (Eds.), Advances in Neural Information Processing Systems (Vol. 27). Curran Associates, Inc.
Webster, J. J., & Kit, C. (1992). Tokenization as the Initial Phase in NLP. COLING 1992 Volume 4: The 14th International Conference on Computational Linguistics. COLING 1992.
Zhou, H., Huang, M., Zhang, T., Zhu, X., & Liu, B. (2018). Emotional Chatting Machine: Emotional Conversation Generation with Internal and External Memory. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1), Article 1.
基於改進的正向最大匹配中文分詞算法研究 [Research on an improved forward maximum matching algorithm for Chinese word segmentation]. 貴州大學學報(自然科學版) [Journal of Guizhou University (Natural Science Edition)], 2011(05). (n.d.). Retrieved March 13, 2021.
Advisor: 陳以錚 (Yi-Cheng Chen)    Review Date: 2021-8-4
