Master's and Doctoral Theses: Detailed Record for 108522073




Name: 邱妤靜 (Yu-Ching Chiu)   Graduating Department: Department of Computer Science and Information Engineering (資訊工程學系)
Thesis Title: 使用多任務學習改善使用者意圖分類
(Improve User Intent Classification Using Multitask Learning)
Related Theses
★ A Real-time Embedding Increasing for Session-based Recommendation with Graph Neural Networks
★ Modifying the Training Objective Based on the Principal Diagnosis for ICD-10 Coding of Discharge Summaries
★ A Hybrid Approach to Recognizing Heart Disease Risk Factors and Their Progression in Electronic Medical Records
★ A Rapid Adoption Method Based on Requirements-Analysis Deliverables Produced with PowerDesigner
★ Question Retrieval in Community Forums
★ Unsupervised Event Type Recognition in Historical Texts: Weisuo Events in the Ming Shilu as a Case Study
★ Analyzing Character Relationships in Literary Fiction with Natural Language Processing: An Interactive Visualization
★ Extracting Function-Level Descriptions of Biological Representations from Biomedical Text: A K-Nearest-Neighbor Algorithm Inspired by Principal Component Analysis
★ Building Category-Based Article Representation Vectors for Cross-Lingual Online Encyclopedia Linking
★ Code-Mixing Language Model for Sentiment Analysis in Code-Mixing Data
★ Improving Dialogue State Tracking by Incorporating Multiple Speech Recognition Results
★ Applying Dialogue Systems to a Chinese Online Customer Service Assistant: A Case Study in the Telecommunications Domain
★ Applying Recurrent Neural Networks to Answering Questions at Appropriate Times
★ Using Transfer Learning to Improve the Pivot-Language Approach to Named-Entity Transliteration
★ Finding Experts on Community Question-Answering Sites with Historical-Information Vectors and Topic-Expertise Vectors
★ Improving User Intent Classification Performance with the YMCL Model
  1. Access permission for this electronic thesis: consent to immediate open access.
  2. The full text of theses that have reached their open-access date is licensed to users solely for personal, non-profit retrieval, reading, and printing for the purpose of academic research.
  3. Please comply with the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast this work without authorization.

Abstract (Chinese) Question answering systems are becoming increasingly popular in natural language processing, especially in smart factory environments. In this study, we simulated a factory setting by having users go through a multi-step robot assembly task. We designed an interactive dialogue system that handles users' questions by classifying their intent during the assembly process. Because the same question should receive different answers at different steps of the assembly, our system combines the user's question with the user's current video frame to better situate the question and analyze the intent. In addition, we introduce (a) dialogue act classification and (b) determining whether the current question is related to the current step as auxiliary tasks to support user intent classification. Multimodal and multi-task learning together improve the accuracy of user intent classification.
Abstract (English) Question answering systems are becoming increasingly popular in Natural Language Processing, especially when applied in smart factory settings. In this study, we simulated a manufacturing setting by having users go through a multi-stage robot assembly task. We designed an interactive dialogue system that handles user questions by classifying their intent during the assembly process. To tackle the issue that users require different answers for questions sharing the same intent when those questions arise at different stages of the work process, our system combines user utterances with a real-time video feed to better situate user questions and analyze their intent. We also introduce dialogue act classification and step-independent classification as auxiliary tasks to support our user intent classification. Combining multimodality and multitask learning, our proposed model improves the performance of intent classification.
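The abstracts describe the architecture only at a high level. As a rough illustration of how such a multimodal, multi-task classifier can be wired together, the following PyTorch sketch pairs a BERT utterance encoder with a ResNet-18 frame encoder and attaches three classification heads (user intent, dialogue act, step relevance) to a shared fused representation. All module choices, names, and the auxiliary loss weight are assumptions for illustration, not the model actually used in the thesis.

import torch
import torch.nn as nn
from transformers import BertModel
from torchvision.models import resnet18

class MultimodalMultitaskClassifier(nn.Module):
    """Minimal sketch: fuse an utterance encoding with a video-frame encoding,
    then predict the main task (user intent) and two auxiliary tasks
    (dialogue act, step relevance) from the shared representation."""
    def __init__(self, num_intents, num_dialogue_acts, hidden=256):
        super().__init__()
        self.text_encoder = BertModel.from_pretrained("bert-base-chinese")  # assumed checkpoint
        vision = resnet18(weights=None)
        vision.fc = nn.Identity()                       # keep the 512-d pooled feature
        self.visual_encoder = vision
        fused_dim = self.text_encoder.config.hidden_size + 512
        self.fusion = nn.Sequential(nn.Linear(fused_dim, hidden), nn.ReLU())
        self.intent_head = nn.Linear(hidden, num_intents)              # main task
        self.dialogue_act_head = nn.Linear(hidden, num_dialogue_acts)  # auxiliary task (a)
        self.step_relevance_head = nn.Linear(hidden, 2)                # auxiliary task (b)

    def forward(self, input_ids, attention_mask, frame):
        text = self.text_encoder(input_ids=input_ids,
                                 attention_mask=attention_mask).pooler_output
        image = self.visual_encoder(frame)              # (batch, 512)
        fused = self.fusion(torch.cat([text, image], dim=-1))
        return (self.intent_head(fused),
                self.dialogue_act_head(fused),
                self.step_relevance_head(fused))

def multitask_loss(logits, labels, aux_weight=0.5):
    """Joint objective: main-task loss plus down-weighted auxiliary losses.
    The 0.5 weight is a placeholder, not a value reported in the thesis."""
    intent_logits, act_logits, step_logits = logits
    intent_y, act_y, step_y = labels
    ce = nn.CrossEntropyLoss()
    return (ce(intent_logits, intent_y)
            + aux_weight * ce(act_logits, act_y)
            + aux_weight * ce(step_logits, step_y))

In this kind of setup, the auxiliary heads act as regularizers: they share the fused text-and-vision representation, so gradients from the dialogue-act and step-relevance labels shape features that also benefit the main intent classifier.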
Keywords (Chinese) ★ Dialogue system (對話系統)
★ Intent classification task (意圖分類任務)
★ Multimodality (多模態)
★ Multi-task learning (多任務學習)
Keywords (English) ★ Dialogue system
★ Intent classification task
★ Multi-modality
★ Multi-task
Table of Contents Chinese Abstract (中文摘要) i
Abstract ii
Contents iii
List of Figures v
List of Tables vi
1 Introduction 1
1.1 Impact of Artificial intelligence in manufacturing 1
1.2 Chatbot in manufacturing 2
1.2.1 Early phase: Orientation Training 2
1.2.2 Mid phase: Assembly Line 2
1.2.3 Later phase: Repair and Maintenance 3
1.3 Meccanoid Final Assembly Task 3
2 Related Work 6
2.1 Chatbot and Dialogue System 6
2.2 Dialogue Act Classification 7
3 Method 8
3.1 Dataset 8
3.1.1 Pre-Data Collection: Wizard-of-Oz Experiment 8
3.1.2 Post Data Collection: Crowdsourcing 10
3.1.3 Voice Data Collection 12
3.1.4 Visual Data Collection 12
3.1.5 Multimodal Dataset 13
3.1.6 Data Annotation 13
3.2 Multi-modal Intent Classification Model with Multi-Task Learning 14
4 Experiment and Result 16
4.1 Different ways of training and different combinations of tasks 16
4.2 Analysis on Single Task Classification and Multi-task Classification 17
4.3 Different Visual Context Capture Method 22
4.4 Analysis on Real Voice Data (RVD) 23
4.5 Analysis on Model Robustness 24
5 Conclusion 26
Bibliography 27
References [1] F. Meziane, S. Vadera, K. Kobbacy, and N. Proudlove, “Intelligent systems in manufacturing: current developments and future prospects,” Integrated Manufacturing Systems, 2000.
[2] X. Yao, J. Zhou, J. Zhang, and C. R. Boër, “From intelligent manufacturing to smart manufacturing for industry 4.0 driven by next generation artificial intelligence and further on,” in 2017 5th international conference on enterprise systems (ES). IEEE, 2017, pp. 311–318.
[3] L. Lattanzi, C. Cristalli, D. Massa, S. Boria, P. Lépine, and M. Pellicciari, “Geometrical calibration of a 6-axis robotic arm for high accuracy manufacturing task,” The International Journal of Advanced Manufacturing Technology, vol. 111, no. 7, pp. 1813–1829, 2020.
[4] P. R. Jeyaraj and E. R. S. Nadar, “Computer vision for automatic detection and classification of fabric defect employing deep learning algorithm,” International Journal of Clothing Science and Technology, 2019.
[5] O. Badmos, A. Kopp, T. Bernthaler, and G. Schneider, “Image-based defect detection in lithium-ion battery electrode using convolutional neural networks,” Journal of Intelligent Manufacturing, vol. 31, no. 4, pp. 885–897, 2020.
[6] D. Griol and Z. Callejas, “An architecture to develop multimodal educative applications with chatbots,” International Journal of Advanced Robotic Systems, vol. 10, no. 3, p. 175, 2013.
[7] A. Dersingh, P. Srisakulpinyo, S. Rakkarn, and P. Boonkanit, “Chatbot and visual management in production process,” pp. 274–277, 2017.
[8] T.-Y. Chen, “Improve user intent classification performance using ymcl model,” Master’s thesis, National Central University, 2020.
[9] H. Chen, X. Liu, D. Yin, and J. Tang, “A survey on dialogue systems: Recent advances and new frontiers,” Acm Sigkdd Explorations Newsletter, vol. 19, no. 2, pp. 25–35, 2017.
[10] E. Hosseini-Asl, B. McCann, C.-S. Wu, S. Yavuz, and R. Socher, “A simple language model for task-oriented dialogue,” arXiv preprint arXiv:2005.00796, 2020.
[11] B. Peng, C. Li, J. Li, S. Shayandeh, L. Liden, and J. Gao, “Soloist: Few-shot task-oriented dialog with a single pretrained auto-regressive model,” arXiv preprint arXiv:2005.05298, 2020.
[12] D. Adiwardana, M.-T. Luong, D. R. So, J. Hall, N. Fiedel, R. Thoppilan, Z. Yang, A. Kulshreshtha, G. Nemade, Y. Lu et al., “Towards a human-like open-domain chatbot,” arXiv preprint arXiv:2001.09977, 2020.
[13] S. Roller, E. Dinan, N. Goyal, D. Ju, M. Williamson, Y. Liu, J. Xu, M. Ott, K. Shuster, E. M. Smith et al., “Recipes for building an open-domain chatbot,” arXiv preprint arXiv:2004.13637, 2020.
[14] N. Reithinger and M. Klesen, “Dialogue act classification using language models,” in Fifth European Conference on Speech Communication and Technology, 1997.
[15] A. Stolcke, E. Shriberg, R. Bates, N. Coccaro, D. Jurafsky, R. Martin, M. Meteer, K. Ries, P. Taylor, C. Van Ess-Dykema et al., “Dialog act modeling for conversational speech,” in AAAI Spring Symposium on Applying Machine Learning to Discourse Processing, 1998, pp. 98–105.
[16] A. Stolcke, K. Ries, N. Coccaro, E. Shriberg, R. Bates, D. Jurafsky, P. Taylor, R. Martin, C. V. Ess-Dykema, and M. Meteer, “Dialogue act modeling for automatic tagging and recognition of conversational speech,” Computational Linguistics, vol. 26, no. 3, pp. 339–373, 2000.
[17] R. Fernandez and R. W. Picard, “Dialog act classification from prosodic features using support vector machines,” in Speech Prosody 2002, International Conference, 2002.
[18] S. Grau, E. Sanchis, M. J. Castro, and D. Vilar, “Dialogue act classification using a bayesian approach,” in 9th Conference Speech and Computer, 2004.
[19] R. Li, C. Lin, M. Collinson, X. Li, and G. Chen, “A dual-attention hierarchical recurrent neural network for dialogue act classification,” arXiv preprint arXiv:1810.09154, 2018.
[20] T. Saha, S. Srivastava, M. Firdaus, S. Saha, A. Ekbal, and P. Bhattacharyya, “Exploring machine learning and deep learning frameworks for task-oriented dialogue act classification,” in 2019 International Joint Conference on Neural Networks (IJCNN). IEEE, 2019, pp. 1–8.
[21] T. Saha, A. Patra, S. Saha, and P. Bhattacharyya, “Towards emotion-aided multimodal dialogue act classification,” in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020, pp. 4361–4372.
[22] N. Dahlbäck, A. Jönsson, and L. Ahrenberg, “Wizard of Oz studies—why and how,” Knowledge-Based Systems, vol. 6, no. 4, pp. 258–266, 1993.
[23] S. Paul, R. Goel, and D. Hakkani-Tür, “Towards universal dialogue act tagging for task-oriented dialogues,” arXiv preprint arXiv:1907.03020, 2019.
[24] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “Bert: Pre-training of deep bidirectional transformers for language understanding,” arXiv preprint arXiv:1810.04805, 2018.
Advisor: 蔡宗翰 (Richard Tzong-Han Tsai)   Date of Approval: 2021-09-09
