Abstract: In Natural Language Processing (NLP), most currently available models support translation into Chinese or dialogue generation in Chinese. However, written Chinese comes in two forms, Simplified and Traditional, and the Chinese these models support is mostly Simplified. Although both are Chinese, their vocabulary and usage are not the same.
Due to the lack of Traditional Chinese translation and dialogue data, this paper combines translation and dialogue generation, using English as the pivot. In other words, we perform two-way translation between Traditional Chinese and English and carry out the dialogue itself in English. The translation training data are collected from the following news and online-course sources: The China Post, Voice Tube, and Hope English, and the English dialogue model is trained on DailyDialog. For the final test, we adopt Traditional Chinese dialogues from Hi Tutor and TOCFL. We combine two models, mBART50 and DialoGPT, and fine-tune them to generate Traditional Chinese dialogue, as sketched below.
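As a concrete illustration of the pivot pipeline, the following is a minimal sketch that uses the public Hugging Face checkpoints facebook/mbart-large-50-many-to-many-mmt and microsoft/DialoGPT-small as stand-ins for our fine-tuned models. Note that mBART50 exposes Chinese only under the zh_CN language code, so the Traditional Chinese behaviour described in this paper comes from fine-tuning; the untuned checkpoints here are an approximation.

```python
# Minimal sketch of the English-pivot pipeline: a Traditional Chinese
# utterance is translated to English with mBART50, an English reply is
# generated with DialoGPT, and the reply is translated back to Chinese.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    MBart50TokenizerFast,
    MBartForConditionalGeneration,
)

MT_NAME = "facebook/mbart-large-50-many-to-many-mmt"  # stand-in for the fine-tuned model
mt_tok = MBart50TokenizerFast.from_pretrained(MT_NAME)
mt_model = MBartForConditionalGeneration.from_pretrained(MT_NAME)

dlg_tok = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
dlg_model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

def translate(text: str, src: str, tgt: str) -> str:
    """Translate `text` from language code `src` to `tgt` with mBART50."""
    mt_tok.src_lang = src
    batch = mt_tok(text, return_tensors="pt")
    out = mt_model.generate(
        **batch,
        forced_bos_token_id=mt_tok.lang_code_to_id[tgt],
        num_beams=7,  # the beam size that worked best in our experiments
    )
    return mt_tok.batch_decode(out, skip_special_tokens=True)[0]

def reply_zh(utterance_zh: str) -> str:
    """Chinese utterance -> English pivot -> English reply -> Chinese reply."""
    utterance_en = translate(utterance_zh, "zh_CN", "en_XX")
    ids = dlg_tok.encode(utterance_en + dlg_tok.eos_token, return_tensors="pt")
    reply_ids = dlg_model.generate(
        ids, max_length=128, pad_token_id=dlg_tok.eos_token_id
    )
    reply_en = dlg_tok.decode(reply_ids[0, ids.shape[-1]:], skip_special_tokens=True)
    return translate(reply_en, "en_XX", "zh_CN")

print(reply_zh("你今天過得好嗎?"))
```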
Our fine-tuned translation models outperform the original models without fine-tuning, especially when the beam size is 7. After fine-tuning the dialogue model, the results show that the small model generates the most fluent dialogue. In the final experiment, we tune the decoding parameters beam size, top-k, and top-p, and the values that produce the best results are 7, 10, and 0.95, respectively (see the sketch after this abstract). Our best model achieves a BLEU score of 2.85 on the final test. Finally, using this best model, we generate Traditional Chinese dialogues by pivoting through English conversations.
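For reference, here is how the best-performing decoding values reported above map onto Hugging Face generate() arguments, again with the public DialoGPT-small checkpoint standing in for our fine-tuned model. Whether all three settings were combined in a single beam-sample run or tuned in separate runs is an assumption of this sketch.

```python
# The best decoding values from our experiments, expressed as Hugging Face
# generate() arguments. Combining all three in one beam-sample run is an
# assumption of this sketch.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

ids = tok.encode("How was your day?" + tok.eos_token, return_tensors="pt")
reply_ids = model.generate(
    ids,
    num_beams=7,     # beam size 7 gave the best results
    do_sample=True,  # sampling must be enabled for top_k / top_p to apply
    top_k=10,        # restrict sampling to the 10 most probable next tokens
    top_p=0.95,      # nucleus sampling: smallest set with cumulative prob >= 0.95
    max_length=128,
    pad_token_id=tok.eos_token_id,  # DialoGPT defines no pad token; reuse EOS
)
print(tok.decode(reply_ids[0, ids.shape[-1]:], skip_special_tokens=True))
```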