NCU Institutional Repository (theses and dissertations, past exam papers, journal articles, and research projects): Item 987654321/77737


    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/77737


    Title: 基於注意力之英中對譯系統;English-Chinese Translation System Using an Attention Model
    Authors: 楊芷璇;Yang, Chih-Hsuan
    Contributors: Department of Computer Science and Information Engineering
    Keywords: Machine translation; Neural machine translation; Deep learning; Natural language processing
    Date: 2018-08-16
    Issue Date: 2018-08-31 14:54:28 (UTC+8)
    Publisher: National Central University
    Abstract: Deep neural networks (DNNs) have achieved impressive results in natural language processing, and machine translation is one of its important tasks. Neural machine translation has relied mainly on two network architectures, the convolutional neural network (CNN) and the recurrent neural network (RNN), but translation quality depends on the vocabulary and grammatical structure of the languages involved, and sentences produced by deep models often suffer from ungrammatical output and misalignment between bilingual sentences or words. In recent years the Google team proposed the Transformer, an attention model that uses neither CNNs nor RNNs: an encoder-decoder model equipped only with an attention mechanism, which achieved significant results in machine translation. The architecture proposed in this thesis takes the Transformer as its base model. The model consists of multi-layer encoders and decoders and uses multi-head attention to match source-language sentences against target-language sentences by similarity, aligning the words of the two languages. Since the goal of this thesis is to improve translation quality, the proposed architecture modifies the Transformer by applying residual and dense connections: earlier layers are connected to later ones so that information is not lost through multi-layer propagation while the model computes attention, which improves the trained model. In the experiments, the proposed method and the original Transformer are applied to an English-Chinese translation system and compared using the machine-translation evaluation metrics BLEU and WER; the results show that the proposed attention model translates better than the Transformer baseline.
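
    The Transformer summarized in the abstract replaces recurrence and convolution with attention alone. The following is a minimal NumPy sketch of scaled dot-product attention and of multi-head cross-attention, in which target-language tokens (queries) are matched by similarity against source-language tokens (keys and values); the shapes, the function names, and the omission of the learned projection matrices are simplifications for illustration, not details taken from the thesis.

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
        d_k = Q.shape[-1]
        scores = Q @ K.transpose(0, 2, 1) / np.sqrt(d_k)   # query-key similarity
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)     # softmax over the key axis
        return weights @ V                                 # weighted sum of values

    def multi_head_cross_attention(x_tgt, x_src, n_heads=8):
        # Learned projections (W_Q, W_K, W_V, W_O) are omitted for brevity.
        b, s_t, d_model = x_tgt.shape
        d_k = d_model // n_heads
        def split(x):  # (batch, seq, d_model) -> (batch*heads, seq, d_k)
            bb, ss, _ = x.shape
            return x.reshape(bb, ss, n_heads, d_k).transpose(0, 2, 1, 3).reshape(bb * n_heads, ss, d_k)
        out = scaled_dot_product_attention(split(x_tgt), split(x_src), split(x_src))
        return out.reshape(b, n_heads, s_t, d_k).transpose(0, 2, 1, 3).reshape(b, s_t, d_model)

    rng = np.random.default_rng(0)
    src = rng.standard_normal((1, 7, 64))                  # 7 source-language tokens
    tgt = rng.standard_normal((1, 5, 64))                  # 5 target-language tokens
    print(multi_head_cross_attention(tgt, src).shape)      # (1, 5, 64)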
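
    The thesis's stated modification feeds earlier-layer outputs forward so that information is not lost as attention passes through many layers. A hedged sketch of that idea, assuming (our assumption, not the thesis's exact design) that the dense inputs are concatenated and projected back to the model width, with plain residual connections around each sub-layer:

    import numpy as np

    def dense_residual_stack(x, layers, rng):
        d_model = x.shape[-1]
        outputs = [x]                                       # keep every layer's output
        for layer in layers:
            dense_in = np.concatenate(outputs, axis=-1)     # dense connection: all earlier outputs
            # Project back to d_model (random here; learned in a real model).
            W = rng.standard_normal((dense_in.shape[-1], d_model)) / np.sqrt(dense_in.shape[-1])
            h = layer(dense_in @ W)
            outputs.append(h + outputs[-1])                 # residual connection
        return outputs[-1]

    rng = np.random.default_rng(1)
    x = rng.standard_normal((1, 5, 64))
    print(dense_residual_stack(x, [np.tanh] * 4, rng).shape)  # np.tanh stands in for sub-layers; (1, 5, 64)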
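
    Of the two evaluation metrics the abstract mentions, WER is compact enough to show in full: the word-level edit distance between hypothesis and reference, divided by the reference length (BLEU is normally computed with an existing toolkit rather than by hand). A self-contained sketch:

    def wer(reference: str, hypothesis: str) -> float:
        ref, hyp = reference.split(), hypothesis.split()
        # dp[i][j] = edits needed to turn ref[:i] into hyp[:j]
        dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            dp[i][0] = i                                    # deletions
        for j in range(len(hyp) + 1):
            dp[0][j] = j                                    # insertions
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
                dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
        return dp[len(ref)][len(hyp)] / max(len(ref), 1)

    print(wer("the cat sat on the mat", "the cat sit on mat"))  # 2 edits / 6 words = 0.333...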
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Electronic Thesis & Dissertation

    Files in This Item:

    File          Description    Size    Format
    index.html                   0 KB    HTML


    All items in NCUIR are protected by copyright, with all rights reserved.
