Master's/Doctoral Thesis 107221006: Full Metadata Record

DC Field | Value | Language
dc.contributor | 數學系 | zh_TW
dc.creator | 廖翊均 | zh_TW
dc.creator | Yi-Chun Liao | en_US
dc.date.accessioned | 2022-01-14T07:39:07Z |
dc.date.available | 2022-01-14T07:39:07Z |
dc.date.issued | 2022 |
dc.identifier.uri | http://ir.lib.ncu.edu.tw:444/thesis/view_etd.asp?URN=107221006 |
dc.contributor.department | 數學系 | zh_TW
dc.description | 國立中央大學 | zh_TW
dc.description | National Central University | en_US
dc.description.abstract | 在自然語言處理中,目前已知的模型大部分都支援中文翻譯或者對話生成。但是我們知道,中文分為簡體中文以及繁體中文,然而這些模型支援的中文多為簡體中文;雖然同樣都是中文,但它們的用詞以及用法都不盡相同。由於缺乏繁體中文的翻譯和對話數據,本文將翻譯和對話相結合來進行。也就是說,我們做了繁體中文和英文的雙向翻譯,以及英文的對話。訓練翻譯的數據來自以下的新聞以及線上課程網站:The China Post、Voice Tube 和 Hope English;對話數據來自 dailydialog;之後我們使用 Hi Tutor 和 TOCFL 來做最後的測試。我們藉由合併 mBART50 以及 DialoGPT 兩種模型並使用微調的方式來生成繁體中文的對話。我們微調出來的翻譯模型其結果皆比原來的模型好,尤其是在 beam size 值為 7 時。對話模型在微調後的結果顯示,小型模型生成的對話最為流暢。在最後的實驗中,我們運用參數 beam size、top k 和 top p 找出能夠產生最佳結果的數值,分別為 7、10 和 0.95。我們最好的模型在最後測試中的 BLEU 分數為 2.85。最後,我們使用微調出來的最好的模型,生成了一段以英文對話為樞紐而產生的繁體中文對話。 | zh_TW
dc.description.abstract | In Natural Language Processing (NLP), most currently available models support Chinese translation or dialogue generation. Chinese, however, has two written forms, Simplified and Traditional, and the Chinese these models support is mostly Simplified; although both are Chinese, their characters and usage differ. Because translation and dialogue data in Traditional Chinese are scarce, this thesis combines translation and dialogue, using English as the pivot: we perform two-way translation between Traditional Chinese and English and generate the dialogue itself in English. The translation training data are collected from the following news and online-course websites: The China Post, Voice Tube, and Hope English; the dialogue data come from dailydialog. For the final test, we adopt Traditional Chinese dialogues from Hi Tutor and TOCFL. We combine two models, mBART50 and DialoGPT, and fine-tune both to generate Traditional Chinese dialogue. Our fine-tuned translation models all outperform the original models, especially at a beam size of 7. After fine-tuning the dialogue model, the small model generates the most fluent dialogue. In the final experiment, we tune the parameters beam size, top-k, and top-p, finding that 7, 10, and 0.95, respectively, produce the best results. Our best model achieves a BLEU score of 2.85 on the final test. Finally, using the best fine-tuned model, we generate a Traditional Chinese dialogue with English conversation as the pivot (a minimal pipeline sketch follows this record). | en_US
dc.subject | 機器翻譯 | zh_TW
dc.subject | 對話生成 | zh_TW
dc.subject | mBART50 | zh_TW
dc.subject | DialoGPT | zh_TW
dc.subject | Machine Translation | en_US
dc.subject | Dialogue Generation | en_US
dc.subject | mBART50 | en_US
dc.subject | DialoGPT | en_US
dc.title | 藉由mBART50和DialoGPT所生成之繁體中文對話 | zh_TW
dc.language.iso | zh-TW | zh-TW
dc.title | Dialogue Generation in Traditional Chinese Utilizing the Algorithm Based on mBART50 and DialoGPT | en_US
dc.type | 博碩士論文 | zh_TW
dc.type | thesis | en_US
dc.publisher | National Central University | en_US
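
The abstract describes a three-step pivot pipeline: translate a Traditional Chinese utterance into English with mBART50, generate an English reply with DialoGPT, and translate that reply back into Traditional Chinese. The following Python sketch illustrates that flow with the Hugging Face transformers library, under stated assumptions: the public facebook/mbart-large-50-many-to-many-mmt and microsoft/DialoGPT-small checkpoints stand in for the thesis's fine-tuned weights (which are not published in this record), and mBART50's zh_CN language code stands in for Traditional Chinese, since the stock model exposes no zh_TW code. The decoding values are the ones reported in the abstract (beam size 7, top-k 10, top-p 0.95).

from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    MBart50TokenizerFast,
    MBartForConditionalGeneration,
)

# Public checkpoints as stand-ins; the thesis fine-tunes both models on its own data.
mt_name = "facebook/mbart-large-50-many-to-many-mmt"
mt_tok = MBart50TokenizerFast.from_pretrained(mt_name)
mt_model = MBartForConditionalGeneration.from_pretrained(mt_name)

dg_name = "microsoft/DialoGPT-small"  # the small model was the most fluent after fine-tuning
dg_tok = AutoTokenizer.from_pretrained(dg_name)
dg_model = AutoModelForCausalLM.from_pretrained(dg_name)

def translate(text, src, tgt):
    # mBART50 translation; num_beams=7 is the best beam size reported in the abstract.
    mt_tok.src_lang = src
    enc = mt_tok(text, return_tensors="pt")
    out = mt_model.generate(
        **enc,
        forced_bos_token_id=mt_tok.lang_code_to_id[tgt],
        num_beams=7,
        max_length=64,
    )
    return mt_tok.batch_decode(out, skip_special_tokens=True)[0]

def reply_zh_tw(utterance_zh):
    # Step 1: zh-TW -> English pivot (zh_CN is mBART50's only Chinese code).
    pivot_en = translate(utterance_zh, src="zh_CN", tgt="en_XX")
    # Step 2: English reply with DialoGPT, sampled with top_k=10 and top_p=0.95.
    ids = dg_tok.encode(pivot_en + dg_tok.eos_token, return_tensors="pt")
    out = dg_model.generate(
        ids,
        max_length=128,
        do_sample=True,
        top_k=10,
        top_p=0.95,
        pad_token_id=dg_tok.eos_token_id,
    )
    reply_en = dg_tok.decode(out[0, ids.shape[-1]:], skip_special_tokens=True)
    # Step 3: English reply -> zh-TW.
    return translate(reply_en, src="en_XX", tgt="zh_CN")

print(reply_zh_tw("你今天過得怎麼樣?"))

Note that with the stock checkpoints the last step tends to return Simplified characters; the fine-tuning on Traditional Chinese data described in the abstract is what makes that final translation produce zh-TW output.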
