Master's/Doctoral Thesis 106522104: Detailed Record




Author 黃嘉銘 (Ka Ming Wong)   Department Computer Science and Information Engineering
Thesis Title A Hybrid Embedding Approach for XLM to Dialect Neural Machine Translation
Related Theses
★ A Real-time Embedding Increasing for Session-based Recommendation with Graph Neural Networks
★ Modifying the Training Objective Based on the Principal Diagnosis for ICD-10 Coding of Discharge Summaries
★ A Hybrid Approach to Identifying Heart-Disease Risk Factors and Their Progression in Electronic Medical Records
★ A Rapid Adoption Method for Requirements-Analysis Deliverables Based on PowerDesigner Specifications
★ Question Retrieval in Community Forums
★ Unsupervised Event-Type Recognition in Historical Texts: Garrison (Weisuo) Events in the Ming Shilu as a Case Study
★ Analyzing Character Relationships in Literary Fiction with Natural Language Processing: an Interactive Visualization
★ Extracting Function-Level Descriptions of Biological Representations from Biomedical Text: a K-Nearest-Neighbor Algorithm Inspired by Principal Component Analysis
★ Building Article Representation Vectors from a Classification Scheme for Cross-Lingual Online Encyclopedia Linking
★ Code-Mixing Language Model for Sentiment Analysis in Code-Mixing Data
★ Improving Dialogue State Tracking by Incorporating Multiple Speech-Recognition Results
★ A Dialogue System for Chinese Online Customer Service: the Telecom Domain as a Case Study
★ Applying Recurrent Neural Networks to Answering Questions at the Appropriate Time
★ Improving User-Intent Classification with Multi-Task Learning
★ Improving the Pivot-Language Approach to Named-Entity Transliteration with Transfer Learning
★ Finding Experts on Community Q&A Sites with Historical-Information Vectors and Topic-Expertise Vectors
  1. The author has agreed to make this electronic thesis openly available immediately.
  2. The open-access full text is licensed only for individual, non-profit retrieval, reading, and printing for academic research purposes.
  3. Please comply with the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast this work without authorization.

Abstract (Chinese) Cantonese is a variety of Chinese. It is widely used in southern China and has many speakers around the world. Although Cantonese and Mandarin share the same writing system and most word meanings, differences in grammar and word usage make the two dialects mutually unintelligible, so building a translation model for them is significant work. Unsupervised neural machine translation is the most suitable approach for these languages because parallel data is scarce. In this thesis, we propose a method that combines a modified cross-lingual language model with layer-by-layer attention for unsupervised neural machine translation. In our experiments, the proposed method improves Cantonese-to-Chinese and Chinese-to-Cantonese translation by 1.088 and 0.394 BLEU, respectively. We also find that the domain and quality of the training data have a huge impact on translation performance: Cantonese data parsed from social networks, especially from forums (LIHKG 連登), is not an ideal resource for dialect translation.
Abstract (English) Cantonese is a variant of Chinese. It is widely used in the southern part of China and has many speakers around the world. Although Cantonese and Standard Chinese share the same writing system and most word meanings, the two dialects are not mutually intelligible because of differences in grammar and word usage. Creating a translation model for these languages is therefore significant work. Unsupervised neural machine translation is the most suitable method for these languages because parallel data is scarce. In this thesis, we propose a method that combines a modified cross-lingual language model with layer-by-layer attention for unsupervised neural machine translation. In our experiments, the proposed method improves Cantonese-to-Chinese and Chinese-to-Cantonese translation by 1.088 and 0.349 BLEU, respectively. We also find that the domain and quality of the training data have a huge impact on translation performance: Cantonese data parsed from social networks, especially from forums (LIHKG 連登), is not an ideal resource for dialect translation.
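The "layer-by-layer attention" described in the abstract pairs each decoder layer with the encoder layer at the same depth, rather than attending only to the encoder's top layer. A minimal single-head NumPy sketch of that idea follows; the shapes, the residual connection, and the absence of learned projections are illustrative simplifications, not the thesis's actual configuration:

```python
import numpy as np

def softmax(x):
    """Row-wise softmax, shifted by the row max for numerical stability."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(q, k, v):
    """Single-head scaled dot-product attention: softmax(QK^T / sqrt(d)) V."""
    d = q.shape[-1]
    return softmax(q @ k.T / np.sqrt(d)) @ v

def layerwise_decode(enc_states, tgt):
    """Layer-wise coordination: the i-th decoder step attends to the
    i-th encoder layer's hidden states, not just the final encoder layer.

    enc_states: list of (src_len, d) arrays, one per encoder layer.
    tgt:        (tgt_len, d) decoder input representations.
    """
    h = tgt
    for enc_h in enc_states:                # same-depth pairing of layers
        h = h + attention(h, enc_h, enc_h)  # residual cross-attention
    return h
```

In a standard Transformer every decoder layer sees only the top encoder output; the loop above instead exposes intermediate encoder representations, which is the coordination idea the method builds on.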
Keywords (Chinese) ★ Unsupervised Neural Machine Translation
★ Deep Learning
★ Low Resource
Keywords (English) ★ Unsupervised Neural Machine Translation
★ Deep Learning
★ Low Resource
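The BLEU gains quoted in the abstracts use the standard n-gram precision metric with a brevity penalty. A minimal pure-Python sentence-level sketch is below; the tiny smoothing constant is an illustrative choice, not the thesis's actual evaluation setup:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of contiguous n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU with uniform weights and a brevity penalty.

    candidate, reference: lists of tokens; returns a float in [0, 1].
    """
    if not candidate:
        return 0.0
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum((cand & ref).values())  # clipped n-gram matches
        total = max(sum(cand.values()), 1)
        # epsilon smoothing so one empty n-gram order does not zero the score
        precisions.append(max(overlap, 1e-9) / total)
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # brevity penalty punishes candidates shorter than the reference
    bp = 1.0 if len(candidate) >= len(reference) else math.exp(1 - len(reference) / len(candidate))
    return bp * geo_mean
```

An identical candidate and reference score 1.0. Corpus-level BLEU, as normally reported for MT systems, aggregates clipped n-gram counts over the whole test set rather than averaging per-sentence scores.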
Table of Contents Chinese Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v
Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi
Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
List of Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
2 Related work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.1 Neural networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.1.1 Recurrent Neural Network . . . . . . . . . . . . . . . . . . . . . 5
2.1.2 Sequence to Sequence . . . . . . . . . . . . . . . . . . . . . . . 6
2.1.3 Transformer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.2 Pretrained Language Model . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.2.1 BERT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.2.2 XLM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.3 Neural Machine Translation . . . . . . . . . . . . . . . . . . . . . . . . 10
2.3.1 Supervised Neural Machine Translation . . . . . . . . . . . . . . 10
2.3.2 Unsupervised Neural Machine Translation . . . . . . . . . . . . . 10
2.3.3 Dialect Neural Machine Translation . . . . . . . . . . . . . . . . 11
3 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3.1 Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3.2 Pretrained model for Unsupervised NMT . . . . . . . . . . . . . . . . . 12
3.3 Modify pretrained LM . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.4 Layerwise Coordination . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.5 Model Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
4 Experiment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
4.1 Dataset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
4.2 Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
4.3 Word based vs Character based . . . . . . . . . . . . . . . . . . . . . . . 18
4.4 Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
5 Results and Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
5.1 Result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
5.2 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
5.2.1 Concatenation of Embeddings . . . . . . . . . . . . . . . . . . . 21
5.2.2 Layerwise Coordination . . . . . . . . . . . . . . . . . . . . . . . 21
5.2.3 The increasing amount of Cantonese monolingual datasets . . . . 22
5.2.4 Error Cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
5.2.5 Normalization of Words . . . . . . . . . . . . . . . . . . . . . . 26
5.2.6 Demonstration of the Translation Model . . . . . . . . . . . . . 29
6 Conclusion and Future work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
6.1 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
6.2 Future work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
References [1] Y. Wu, M. Schuster, Z. Chen, Q. V. Le, M. Norouzi, W. Macherey, M. Krikun, Y. Cao, Q. Gao, K. Macherey, J. Klingner, A. Shah, M. Johnson, X. Liu, L. Kaiser, S. Gouws, Y. Kato, T. Kudo, H. Kazawa, K. Stevens, G. Kurian, N. Patil, W. Wang, C. Young, J. Smith, J. Riesa, A. Rudnick, O. Vinyals, G. Corrado, M. Hughes, and J. Dean, "Google's neural machine translation system: Bridging the gap between human and machine translation," 2016.
[2] Y. Wan, B. Yang, D. F. Wong, L. S. Chao, H. Du, and B. C. H. Ao, "Unsupervised neural dialect translation with commonality and diversity modeling," 2019.
[3] T. He, X. Tan, Y. Xia, D. He, T. Qin, Z. Chen, and T.-Y. Liu, "Layer-wise coordination between encoder and decoder for neural machine translation," in Advances in Neural Information Processing Systems, S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, Eds., vol. 31. Curran Associates, Inc., 2018. [Online]. Available: https://proceedings.neurips.cc/paper/2018/file/4fb8a7a22a82c80f2c26fe6c1e0dcbb3-Paper.pdf
[4] I. Sutskever, O. Vinyals, and Q. V. Le, "Sequence to sequence learning with neural networks," 2014.
[5] D. Bahdanau, K. Cho, and Y. Bengio, "Neural machine translation by jointly learning to align and translate," 2016.
[6] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, "Attention is all you need," 2017.
[7] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," 2019.
[8] G. Lample and A. Conneau, "Cross-lingual language model pretraining," 2019.
[9] N. Kalchbrenner and P. Blunsom, "Recurrent continuous translation models," in Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Seattle, Washington, USA: Association for Computational Linguistics, Oct. 2013, pp. 1700–1709. [Online]. Available: https://aclanthology.org/D13-1176
[10] M. Artetxe, G. Labaka, E. Agirre, and K. Cho, "Unsupervised neural machine translation," 2018.
[11] G. Lample, A. Conneau, L. Denoyer, and M. Ranzato, "Unsupervised machine translation using monolingual corpora only," 2018.
[12] R. Sennrich, B. Haddow, and A. Birch, "Improving neural machine translation models with monolingual data," 2016.
[13] G. Lample, M. Ott, A. Conneau, L. Denoyer, and M. Ranzato, "Phrase-based & neural unsupervised machine translation," 2018.
[14] R. Sennrich, B. Haddow, and A. Birch, "Neural machine translation of rare words with subword units," 2016.
[15] A. Conneau, K. Khandelwal, N. Goyal, V. Chaudhary, G. Wenzek, F. Guzmán, E. Grave, M. Ott, L. Zettlemoyer, and V. Stoyanov, "Unsupervised cross-lingual representation learning at scale," 2020.
[16] K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu, "BLEU: a method for automatic evaluation of machine translation," in Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. Philadelphia, Pennsylvania, USA: Association for Computational Linguistics, Jul. 2002, pp. 311–318. [Online]. Available: https://aclanthology.org/P02-1040
[17] Y. Kim, M. Graça, and H. Ney, "When and why is unsupervised neural machine translation useless?" in Proceedings of the 22nd Annual Conference of the European Association for Machine Translation. Lisboa, Portugal: European Association for Machine Translation, Nov. 2020, pp. 35–44. [Online]. Available: https://aclanthology.org/2020.eamt-1.5
Advisor 蔡宗翰 (Richard Tzong-Han Tsai)  Date of Approval 2021-11-25
