Electronic Thesis 108423010: Detailed Record




Author: Wei-Liang Chen (陳威良)   Department: Information Management
Title: 基於變換隱空間之風格化對話生成
(Transfer Latent Spaces for Stylized Dialogue Generation)
Related theses
★ An Empirical Study of Multi-label Text Classification: Word Embedding versus Traditional Techniques
★ Graph Neural Network-based Correlation Analysis of Network Protocols
★ Learning Shared Representations Across and Within Modalities
★ Hierarchical Classification and Regression with Feature Selection
★ Symptom-based Sentiment Analysis of Patient-authored Diaries
★ An Attention-based Open-domain Dialogue System
★ Domain-specific Tasks: Applications of Commonsense-based BERT Models
★ Analyzing Text Sentiment Intensity Based on Hardware Device Differences Among Social Media Users
★ On the Effectiveness of Machine Learning and Feature Engineering for Monitoring Anomalous Cryptocurrency Transactions
★ Applying LSTM Networks and Machine Learning to Optimal Maintenance Reminders for Metro Railway Switches
★ Semi-supervised Network Traffic Classification
★ ERP Log Analysis: A Case Study of Company A
★ Enterprise Information Security: An Exploratory Study of Network Packet Collection, Analysis, and Network Behavior
★ Applying Data Mining to Customer Relationship Management: A Case Study of Bank C's Digital Deposits
★ On the Usability and Efficiency of Face Image Generation and Augmentation
★ Synthetic Text Data Augmentation for Imbalanced Text Classification
Full text: viewable in the repository system after 2026-7-21.
Abstract (Chinese) Dialogue generation technology has shown great potential, yet today's dialogue systems usually produce plain, generic responses. Having the system produce stylized responses directly is one way to make its output more diverse. In this thesis we propose a stylized dialogue generation method that performs style transfer while generating the response, so that a single query can receive responses in multiple styles; the goal is to let a machine reply with a style appropriate to each dialogue scenario. The task effectively combines dialogue generation with style transfer, so we require not only that the response be appropriate, but also that it exhibit strong style.
Because of the nature of the data, dialogue datasets are usually parallel (each context has a corresponding response), while style corpora are usually non-parallel. We therefore build the dialogue model with supervised learning and the style transfer model with unsupervised learning, share one decoder between them, and combine them into a multi-task model. We propose a lightweight deep neural network that bridges the latent space of the dialogue generation model to the stylized latent space, so that plugging in a different style's network lets the model generate a response in the corresponding style, producing particularly vivid and memorable replies. We show that the dialogue latent space can be successfully bridged to the style transfer latent space, and we compare our model against baseline models using several automatic evaluation metrics together with human evaluation. The results show that our model generates responses with strong style, far exceeding the baselines in style intensity, surpassing them in fluency, and matching them in response appropriateness. We further hope the model applies broadly, so we test it on two external dialogue datasets; the results show it generalizes well to everyday-conversation text.
Abstract (English) Dialogue generation technology has shown great potential, but current dialogue systems usually generate plain and generic responses. Allowing the dialogue system to produce stylized responses directly is one solution that lets it generate diversified responses. In this study, we propose a stylized dialogue generation method: while generating a response, we also perform style transfer, so that a single question can receive responses in multiple styles. The purpose is to allow the machine to respond with an appropriate style in different dialogue scenarios. The task can be seen as an effective combination of dialogue generation and style transfer, so we not only require the response to be appropriate but also emphasize its ability to exhibit high style intensity.
Because of the characteristics of the data, dialogue datasets are usually parallel (each context has a corresponding response), while style text datasets are usually non-parallel. We therefore use supervised learning to construct the dialogue generation model, use unsupervised learning to construct the style transfer model, have the two models share a decoder, and combine them into a hybrid model. We propose using lightweight deep neural networks to bridge the latent space of the dialogue generation model to that of the style transfer model. This structure allows the model to generate many different, impressive stylized sentences. In Chapter 4, we show that the dialogue generation latent space can be successfully bridged to the style transfer latent space, and we use multiple automatic evaluation metrics together with human evaluation to compare the effectiveness of our proposed model with the benchmark models in many aspects. The results indicate that our model's style intensity and sentence fluency are much better than the benchmarks', while its response appropriateness remains comparable. In addition, we use two external dialogue datasets to test the applicability of our model; the results show that it applies well to daily-dialogue text.
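The architecture the abstract describes pairs a dialogue encoder and a style transfer model around one shared decoder, with a lightweight plug-in network mapping a dialogue latent vector into a style latent space. A minimal sketch of that bridging step follows; the function name, dimensions, and random weights are illustrative assumptions, not the thesis implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def plug_in_bridge(z_dialogue, latent_dim=256, hidden_dim=512):
    """Lightweight two-layer MLP mapping a dialogue latent vector into a
    style latent space. Weights are random here for illustration; in the
    described setup one such bridge would be trained per target style."""
    w1 = rng.standard_normal((latent_dim, hidden_dim)) * 0.02
    w2 = rng.standard_normal((hidden_dim, latent_dim)) * 0.02
    h = np.maximum(z_dialogue @ w1, 0.0)  # ReLU hidden layer
    return h @ w2                         # latent in the style space

z_dialogue = rng.standard_normal((1, 256))  # latent from the dialogue encoder
z_style = plug_in_bridge(z_dialogue)        # bridged latent for the shared decoder
print(z_style.shape)  # (1, 256)
```

Because the bridge keeps the latent dimensionality unchanged, swapping in a network trained for a different style changes only the region of latent space handed to the shared decoder, and hence the style of the decoded response.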
Keywords (Chinese) ★ Dialogue generation (對話生成)
★ Text style transfer (文字風格轉換)
★ Deep learning (深度學習)
★ Multi-task learning (多任務學習)
Keywords (English) ★ Dialogue generation
★ text style transfer
★ deep neural network
★ multi-task learning
Table of Contents
Abstract (Chinese) i
Abstract ii
Acknowledgement iii
Table of Contents iv
List of Tables vi
List of Figures vii
1 Introduction 1
1.1 Overviews 1
1.2 Motivation 2
1.3 Objectives 3
1.4 Thesis Organization 3
2 Related Works 4
2.1 Dialogue Generation 4
2.2 Text style transfer 8
2.2.1 With fully unlabeled data 8
2.2.2 With style-labeled data 8
2.2.3 With parallel data 10
2.3 Fusion of Supervised and Unsupervised Learning 14
2.4 Evaluation Metrics 15
2.4.1 Dialogue generation evaluation metrics 15
2.4.2 Style transfer evaluation metrics 18
2.5 Chapter Summary 20
3 Methodology 21
3.1 Model Overview 21
3.2 Model Architecture 23
3.3 Training Phase 25
3.3.1 Hybrid model 25
3.3.2 Plug-in DNN 26
3.4 Experiments 27
3.4.1 Datasets 27
3.4.2 Auto Evaluation 30
3.4.3 Human Evaluation 31
3.5 Experiment Settings 32
3.5.1 Preprocessing 32
3.5.2 Model settings 33
3.5.3 Proposed Experiment 33
4 Experiment Results 35
4.1 Experiment 1 – Effectiveness of Stylized Dialogue Generation Methods 35
4.1.1 Plug-in DNN 35
4.1.2 Evaluation 38
4.1.3 Summary of Experiment 1 42
4.2 Experiment 2 – Applicability of Stylized Dialogue Generation Model 44
4.2.1 Applying to Twitter 44
4.2.2 Applying to Movie Line 45
4.2.3 Summary of Experiment 2 46
5 Conclusion 47
5.1 Overall Summary 47
5.2 Contributions 48
5.3 Study limitations 48
5.4 Future Research 48
6 References 49
7 Appendices 53
7.1 Experiment 1 53
References
Bahdanau, D., Cho, K., Bengio, Y., 2016. Neural Machine Translation by Jointly Learning to Align and Translate. Presented at ICLR 2015.
Banerjee, S., Lavie, A., 2005. METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments, in: Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization. Association for Computational Linguistics, Ann Arbor, Michigan, pp. 65–72.
Baur, C., Wiestler, B., Albarqouni, S., Navab, N., 2018. Fusing Unsupervised and Supervised Deep Learning for White Matter Lesion Segmentation. Presented at the International Conference on Medical Imaging with Deep Learning -- Full Paper Track.
Cuayáhuitl, H., 2016. SimpleDS: A Simple Deep Reinforcement Learning Dialogue System. Presented at the International Workshop on Spoken Dialogue Systems (IWSDS), 2016.
Duan, Y., Xu, C., Pei, J., Han, J., Li, C., 2020. Pre-train and Plug-in: Flexible Conditional Text Generation with Variational Auto-Encoders, in: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Presented at the ACL 2020, Association for Computational Linguistics, Online, pp. 253–262. https://doi.org/10.18653/v1/2020.acl-main.23
El Asri, L., Schulz, H., Sharma, S., Zumer, J., Harris, J., Fine, E., Mehrotra, R., Suleman, K., 2017. Frames: a corpus for adding memory to goal-oriented dialogue systems, in: Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue. Association for Computational Linguistics, Saarbrücken, Germany, pp. 207–219. https://doi.org/10.18653/v1/W17-5526
Følstad, A., Skjuve, M., 2019. Chatbots for customer service: user experience and motivation, in: Proceedings of the 1st International Conference on Conversational User Interfaces - CUI ’19. Presented at the the 1st International Conference, ACM Press, Dublin, Ireland, pp. 1–9. https://doi.org/10.1145/3342775.3342784
Fu, Z., Tan, X., Peng, N., Zhao, D., Yan, R., 2018. Style Transfer in Text: Exploration and Evaluation, in: AAAI.
Gao, X., Zhang, Y., Lee, S., Galley, M., Brockett, C., Gao, J., Dolan, B., 2019. Structuring Latent Spaces for Stylized Response Generation, in: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Presented at the EMNLP-IJCNLP 2019, Association for Computational Linguistics, Hong Kong, China, pp. 1814–1823. https://doi.org/10.18653/v1/D19-1190
Gong, H., Bhat, S., Wu, L., Xiong, J., Hwu, W., 2019. Reinforcement Learning Based Text Style Transfer without Parallel Training Corpus, in: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Presented at the NAACL-HLT 2019, Association for Computational Linguistics, Minneapolis, Minnesota, pp. 3168–3180. https://doi.org/10.18653/v1/N19-1320
Karem, F., Dhibi, M., Martin, A., 2012. Combination of Supervised and Unsupervised Classification Using the Theory of Belief Functions. https://doi.org/10.1007/978-3-642-29461-7_10
Li, J., Galley, M., Brockett, C., Spithourakis, G., Gao, J., Dolan, B., 2016a. A Persona-Based Neural Conversation Model, in: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Presented at the ACL 2016, Association for Computational Linguistics, Berlin, Germany, pp. 994–1003. https://doi.org/10.18653/v1/P16-1094
Li, J., Monroe, W., Ritter, A., Jurafsky, D., Galley, M., Gao, J., 2016b. Deep Reinforcement Learning for Dialogue Generation, in: Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Presented at the EMNLP 2016, Association for Computational Linguistics, Austin, Texas, pp. 1192–1202. https://doi.org/10.18653/v1/D16-1127
Li, X., Chen, Y.-N., Li, L., Gao, J., Celikyilmaz, A., 2017. End-to-End Task-Completion Neural Dialogue Systems, in: Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Presented at the IJCNLP 2017, Asian Federation of Natural Language Processing, Taipei, Taiwan, pp. 733–743.
Lin, C.-Y., 2004. ROUGE: A Package for Automatic Evaluation of Summaries, in: Text Summarization Branches Out. Association for Computational Linguistics, Barcelona, Spain, pp. 74–81.
Liu, C.-W., Lowe, R., Serban, I., Noseworthy, M., Charlin, L., Pineau, J., 2016. How NOT To Evaluate Your Dialogue System: An Empirical Study of Unsupervised Evaluation Metrics for Dialogue Response Generation, in: Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Presented at the EMNLP 2016, Association for Computational Linguistics, Austin, Texas, pp. 2122–2132. https://doi.org/10.18653/v1/D16-1230
Madaan, A., Setlur, A., Parekh, T., Poczos, B., Neubig, G., Yang, Y., Salakhutdinov, R., Black, A.W., Prabhumoye, S., 2020. Politeness Transfer: A Tag and Generate Approach, in: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Presented at the ACL 2020, Association for Computational Linguistics, Online, pp. 1869–1881. https://doi.org/10.18653/v1/2020.acl-main.169
Mir, R., Felbo, B., Obradovich, N., Rahwan, I., 2019. Evaluating Style Transfer for Text, in: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Presented at the NAACL-HLT 2019, Association for Computational Linguistics, Minneapolis, Minnesota, pp. 495–504. https://doi.org/10.18653/v1/N19-1049
Niederhoffer, K.G., Pennebaker, J.W., 2002. Linguistic Style Matching in Social Interaction. Journal of Language and Social Psychology 21, 337–360. https://doi.org/10.1177/026192702237953
Nogueira dos Santos, C., Melnyk, I., Padhi, I., 2018. Fighting Offensive Language on Social Media with Unsupervised Text Style Transfer, in: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Presented at the ACL 2018, Association for Computational Linguistics, Melbourne, Australia, pp. 189–194. https://doi.org/10.18653/v1/P18-2031
Pang, R.Y., 2019. The Daunting Task of Real-World Textual Style Transfer Auto-Evaluation. Presented at the WNGT 2019.
Papineni, K., Roukos, S., Ward, T., Zhu, W.-J., 2002. Bleu: a Method for Automatic Evaluation of Machine Translation, in: Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. Presented at the ACL 2002, Association for Computational Linguistics, Philadelphia, Pennsylvania, USA, pp. 311–318. https://doi.org/10.3115/1073083.1073135
Rao, S., Tetreault, J., 2018. Dear Sir or Madam, May I Introduce the GYAFC Dataset: Corpus, Benchmarks and Metrics for Formality Style Transfer, in: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). Presented at the NAACL-HLT 2018, Association for Computational Linguistics, New Orleans, Louisiana, pp. 129–140. https://doi.org/10.18653/v1/N18-1012
Ritter, A., Cherry, C., Dolan, W.B., 2011. Data-Driven Response Generation in Social Media, in: Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing. Presented at the EMNLP 2011, Association for Computational Linguistics, Edinburgh, Scotland, UK., pp. 583–593.
Rus, V., Lintean, M., 2012. A Comparison of Greedy and Optimal Assessment of Natural Language Student Input Using Word-to-Word Similarity Metrics, in: Proceedings of the Seventh Workshop on Building Educational Applications Using NLP. Association for Computational Linguistics, Montréal, Canada, pp. 157–162.
Serban, I., Sordoni, A., Bengio, Y., Courville, A.C., Pineau, J., 2016. Building End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models, in: AAAI.
Sharma, S., Asri, L.E., Schulz, H., Zumer, J., 2017. Relevance of Unsupervised Metrics in Task-Oriented Dialogue for Evaluating Natural Language Generation. arXiv:1706.09799 [cs].
Shen, T., Lei, T., Barzilay, R., Jaakkola, T., 2017. Style transfer from non-parallel text by cross-alignment, in: Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS’17. Curran Associates Inc., Red Hook, NY, USA, pp. 6833–6844.
Sordoni, A., Galley, M., Auli, M., Brockett, C., Ji, Y., Mitchell, M., Nie, J.-Y., Gao, J., Dolan, B., 2015. A Neural Network Approach to Context-Sensitive Generation of Conversational Responses, in: Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Presented at the NAACL-HLT 2015, Association for Computational Linguistics, Denver, Colorado, pp. 196–205. https://doi.org/10.3115/v1/N15-1020
Sutskever, I., Vinyals, O., Le, Q.V., 2014. Sequence to Sequence Learning with Neural Networks, in: Advances in Neural Information Processing Systems. Curran Associates, Inc.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I., 2017. Attention is All you Need, in: Advances in Neural Information Processing Systems. Curran Associates, Inc.
Vinyals, O., Le, Q., 2015. A Neural Conversational Model. Presented at the ICML Deep Learning 2015.
Wang, K., Hua, H., Wan, X., 2019. Controllable Unsupervised Text Attribute Transfer via Editing Entangled Latent Representation. Advances in Neural Information Processing Systems 32.
Wang, Y., Wu, Y., Mou, L., Li, Z., Chao, W., 2020. Formality Style Transfer with Shared Latent Space, in: Proceedings of the 28th International Conference on Computational Linguistics. Presented at the COLING 2020, International Committee on Computational Linguistics, Barcelona, Spain (Online), pp. 2236–2249.
Williams, J.D., Zweig, G., 2016. End-to-end LSTM-based dialog control optimized with supervised and reinforcement learning. arXiv:1606.01269 [cs].
Wu, C., Ren, X., Luo, F., Sun, X., 2019. A Hierarchical Reinforced Sequence Operation Method for Unsupervised Text Style Transfer, in: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Presented at the ACL 2019, Association for Computational Linguistics, Florence, Italy, pp. 4873–4883. https://doi.org/10.18653/v1/P19-1482
Yang, Z., Hu, Z., Dyer, C., Xing, E.P., Berg-Kirkpatrick, T., 2018. Unsupervised Text Style Transfer using Language Models as Discriminators, in: Advances in Neural Information Processing Systems. Curran Associates, Inc.
Zhou, C., Chen, L., Liu, J., Xiao, X., Su, J., Guo, S., Wu, H., 2020. Exploring Contextual Word-level Style Relevance for Unsupervised Style Transfer, in: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Presented at the ACL 2020, Association for Computational Linguistics, Online, pp. 7135–7144. https://doi.org/10.18653/v1/2020.acl-main.639
Advisor: Shih-Wen Ke (柯士文)   Review date: 2021-8-23