Neural Machine Translation (NMT) is one of the most popular research topics in Natural Language Processing (NLP), and many new models are proposed every year. The Transformer, which relies entirely on Multi-head Attention mechanisms, has substantially improved translation accuracy and outperforms many earlier NMT models. However, most follow-up research focuses on new model designs and architectural adjustments rather than optimizing the original Transformer itself. This work therefore improves the Multi-head Self-Attention module of the Transformer, using masks to strengthen the attention mechanism's ability to learn local information in the input sentence without adding any training parameters or training time. The modified model improves the BLEU score by 3.6 to 11.3% on Chinese-English translation and by 17.4% on German-English translation.
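The abstract does not specify the exact form of the mask. A minimal sketch of one way such local masking can work, assuming a symmetric window around each query position and a single attention head (the names `local_attention` and `window` are illustrative, not from the thesis):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def local_attention(Q, K, V, window=2):
    """Scaled dot-product attention restricted to a local window.

    The restriction is applied as an additive mask on the attention
    logits, so no trainable parameters are added. `window` is an
    assumed hyperparameter: each query may only attend to keys at
    most `window` positions away.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)            # (n, n) attention logits
    n = scores.shape[0]
    idx = np.arange(n)
    # Mask out key positions farther than `window` from the query.
    mask = np.abs(idx[:, None] - idx[None, :]) > window
    scores = np.where(mask, -1e9, scores)      # -inf in practice
    weights = softmax(scores, axis=-1)         # rows sum to 1
    return weights @ V, weights
```

In a full multi-head setting, each head could use a different window size, letting some heads specialize in local patterns while others keep the unrestricted global view.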