Master's/Doctoral Thesis 105522118: Detailed Record




Name: Chao Chuang Shih (石朝全)    Department: Computer Science and Information Engineering
Thesis Title: Using transfer learning to improve pivot language approach to named entity transliteration
(Chinese title: 使用轉移學習來改進針對命名實體音譯的樞軸語言方法)
Related Theses
★ A Real-time Embedding Increasing for Session-based Recommendation with Graph Neural Networks
★ Modifying Training Objectives Based on the Primary Diagnosis for ICD-10 Coding of Discharge Summaries
★ A Hybrid Approach to Identifying Cardiovascular Risk Factors and Their Progression in Electronic Medical Records
★ A Rapid Implementation Method Based on Requirements Analysis Deliverables Following the PowerDesigner Standard
★ Question Retrieval in Community Forums
★ Unsupervised Event Type Identification in Historical Texts: A Case Study of Weisuo Events in the Ming Shilu
★ Analyzing Character Relationships in Literary Fiction with Natural Language Processing: An Interactive Visualization
★ Extracting Functional-Level Biological Phenotype Descriptions from Biomedical Text: A k-Nearest-Neighbor Algorithm Inspired by Principal Component Analysis
★ Building Article Representation Vectors from a Classification System for Cross-Language Online Encyclopedia Linking
★ Code-Mixing Language Model for Sentiment Analysis in Code-Mixing Data
★ Improving Dialogue State Tracking by Incorporating Multiple Speech Recognition Results
★ A Dialogue System for Chinese Online Customer Service: A Case Study in the Telecom Domain
★ Applying Recurrent Neural Networks to Answer Questions at the Appropriate Time
★ Improving User Intent Classification with Multi-Task Learning
★ Finding Experts in Community Question-Answering Sites Using Historical Information Vectors and Topic Expertise Vectors
★ Improving User Intent Classification with the YMCL Model
  1. The electronic full text of this thesis is approved for immediate open access.
  2. Once open access takes effect, the electronic full text is licensed to users only for personal, non-commercial retrieval, reading, and printing for academic research purposes.
  3. Please observe the relevant provisions of the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast the work without authorization.

Abstract (Chinese): Machine translation has been studied for many years. Although most sentence patterns can now be translated successfully, sentences containing named entities such as person or place names still often cannot be rendered correctly in the target language, and the problem is even more pronounced when translating between languages other than English. Named entity transliteration is one way to address this problem.

Transliteration is an important part of machine translation, but in practice only a limited amount of parallel data usually exists between the source and target languages, especially when one of them is a low-resource language. In contrast, if we treat a widely used language such as English as a pivot language, parallel data between the source language and the pivot language, and between the pivot language and the target language, is usually much easier to obtain. From these two corpora it is intuitive to construct a trilingual parallel corpus covering the source, pivot, and target languages by matching their common pivot-language entries, and then to solve the original bilingual transliteration problem with it. This approach, however, wastes a large amount of hard-won data: every bilingual pair whose pivot entry has no match in the other corpus is discarded.
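The alignment step described above can be illustrated with a short sketch. This is not code from the thesis; the file names (source_pivot.tsv, pivot_target.tsv) and the tab-separated layout are assumptions made here for illustration only.

```python
# Sketch: build a trilingual corpus by joining two bilingual name lists on the
# shared pivot-language entry. File names and tab-separated layout are assumed.
from collections import defaultdict

def load_pairs(path):
    """Read tab-separated name pairs, e.g. 'left<TAB>right' per line."""
    pairs = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            left, right = line.rstrip("\n").split("\t")
            pairs.append((left, right))
    return pairs

src_pivot = load_pairs("source_pivot.tsv")   # (source, pivot) entries
pivot_tgt = load_pairs("pivot_target.tsv")   # (pivot, target) entries

# Index the pivot->target pairs by their pivot entry.
tgt_by_pivot = defaultdict(list)
for pivot, tgt in pivot_tgt:
    tgt_by_pivot[pivot].append(tgt)

# Keep only entries whose pivot form appears in both corpora; everything else
# is the "remaining data" the thesis aims not to waste.
trilingual = [(src, pivot, tgt)
              for src, pivot in src_pivot
              for tgt in tgt_by_pivot.get(pivot, [])]

print(f"{len(trilingual)} trilingual entries built from "
      f"{len(src_pivot)} + {len(pivot_tgt)} bilingual pairs")
```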

We therefore propose a sequence-to-sequence (Seq2Seq) model with an attention mechanism and transfer learning that, in addition to the trilingual parallel corpus, can effectively exploit the remaining bilingual data to improve named entity transliteration from the source language to the target language.
Abstract (English): Machine translation has been studied for a long time. Although most sentences can be translated correctly, there is still room for improvement when a sentence contains a named entity such as a personal name or a location, especially between non-English languages. Named entity transliteration is a way to address the problem mentioned above.

Transliteration is a key part of machine translation. However, when we actually do research on it, we often have only limited parallel data between the source language and the target language, particularly when one of them is a low-resource language. In contrast, if we take a widely used language as a pivot language, it is usually easier to obtain language pairs from the source language to the pivot language and from the pivot language to the target language. It is intuitive to extract the common pivot-language entries from these corpora to generate trilingual parallel data covering the source, pivot, and target languages, and to carry out the bilingual transliteration task with this parallel data; nevertheless, a large amount of data is wasted by this method.

We propose a modified attention-based sequence-to-sequence model that also applies transfer learning techniques. Besides the trilingual parallel data, our model effectively utilizes the remaining data to improve the performance of named entity transliteration.
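As a rough illustration of the combination of attention and transfer learning (this record does not include the thesis's exact architecture), the following PyTorch sketch shows the general pattern: an attention-based Seq2Seq model is pre-trained on the larger pivot-to-target data, and its decoder-side parameters are copied into a source-to-target model before fine-tuning. All vocabulary sizes, hidden dimensions, and the omitted training loops are placeholders.

```python
# Sketch of the transfer-learning idea: pre-train an attention-based Seq2Seq
# model on pivot->target pairs, then initialise a source->target model with
# its decoder-side weights before fine-tuning. Sizes below are placeholders.
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, emb=64, hid=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.GRU(emb, hid, batch_first=True)
        self.decoder = nn.GRU(emb + hid, hid, batch_first=True)
        self.out = nn.Linear(hid, tgt_vocab)

    def forward(self, src, tgt_in):
        enc_out, h = self.encoder(self.src_emb(src))              # (B, S, H)
        dec_emb = self.tgt_emb(tgt_in)                            # (B, T, E)
        outputs = []
        for t in range(dec_emb.size(1)):
            # Dot-product attention over the encoder states.
            query = h[-1].unsqueeze(1)                            # (B, 1, H)
            scores = torch.bmm(query, enc_out.transpose(1, 2))    # (B, 1, S)
            context = torch.bmm(torch.softmax(scores, -1), enc_out)
            step_in = torch.cat([dec_emb[:, t:t + 1, :], context], dim=-1)
            dec_out, h = self.decoder(step_in, h)
            outputs.append(self.out(dec_out))
        return torch.cat(outputs, dim=1)                          # (B, T, V)

# 1) Pre-train the parent model on pivot -> target pairs (training loop omitted).
parent = Seq2Seq(src_vocab=60, tgt_vocab=5000)

# 2) Build the child model for source -> target and transfer the decoder-side
#    parameters; the child encoder is then trained on the smaller data.
child = Seq2Seq(src_vocab=80, tgt_vocab=5000)
child.tgt_emb.load_state_dict(parent.tgt_emb.state_dict())
child.decoder.load_state_dict(parent.decoder.state_dict())
child.out.load_state_dict(parent.out.state_dict())

# 3) Fine-tune the child on the source -> target pairs (training loop omitted).
```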
Keywords (Chinese) ★ Machine Transliteration (機器音譯)
★ Machine Translation (機器翻譯)
★ Named Entity Transliteration (命名實體音譯)
★ Bilingual Transliteration (雙語音譯)
★ Transfer Learning (轉移學習)
★ Attention Mechanism (注意力機制)
★ Seq2Seq Model (Seq2Seq模型)
★ Pivot Language (樞軸語言)
Keywords (English) ★ Machine Transliteration
★ Machine Translation
★ Named Entity Transliteration
★ Bilingual Transliteration
★ Transfer Learning
★ Attention Mechanism
★ Seq2Seq Model
★ Pivot Language
★ Bridge Language
Table of Contents
Chinese Abstract i
Abstract ii
Acknowledgments iv
Contents vii
Chapter 1: Introduction 1
1.1 Research Motivation 3
1.2 Problem Description 4
1.3 Chapter Overview 5
Chapter 2: Related Work 6
2.1 Transliteration 6
2.1.1 Bilingual Transliteration 6
2.1.2 Using Limited Parallel Corpora and the Low-Resource Language Problem 7
2.2 Transfer Learning 7
Chapter 3: System Architecture 9
3.1 Making the Source-Language and Pivot-Language Encoders Similar 11
3.2 Transliteration from the Pivot Language to the Target Language 13
3.3 Transliteration from the Source Language to the Target Language 15
Chapter 4: Experimental Methods and Discussion 17
4.1 Datasets 17
4.2 Data Preprocessing 18
4.3 Evaluation Metrics 18
4.3.1 Word Accuracy (ACC) 18
4.3.2 Mean F-score 19
4.4 Parameter Description 20
4.5 Experimental Results 21
4.6 Analysis and Discussion 22
Chapter 5: Conclusion and Future Work 25
References 26
Appendix A: Selected Transliteration Results 28
Advisor: 蔡宗翰    Date of Approval: 2019-01-31
