Master's/Doctoral Thesis 107522063: Full Metadata Record

DC Field: Value (Language)
dc.contributor: 資訊工程學系 (Department of Computer Science and Information Engineering) (zh_TW)
dc.creator: 陳柏凱 (zh_TW)
dc.creator: Po-Kai Chen (en_US)
dc.date.accessioned: 2020-07-29T07:39:07Z
dc.date.available: 2020-07-29T07:39:07Z
dc.date.issued: 2020
dc.identifier.uri: http://ir.lib.ncu.edu.tw:444/thesis/view_etd.asp?URN=107522063
dc.contributor.department: 資訊工程學系 (Department of Computer Science and Information Engineering) (zh_TW)
dc.description: 國立中央大學 (zh_TW)
dc.description: National Central University (en_US)
dc.description.abstract: 近幾年語音識別研究逐漸轉向端到端模型發展,簡化了整體模型的流程。而2015年的“Listen, Attend and Spell”論文中,首次將Seq-to-Seq的架構以及Attention機制用於端到端語音識別任務中,奠定了目前端到端語音識別模型的型式。遺憾的是,Attention是基於全序列建模,因而無法識別片段序列,也因此無法完美地應用於串流的場合中。基於Attention片段序列識別的問題,本論文使用Layer-level Time Limited Attention Mask(L-TLAM),提高了模型對非完整序列之建模能力,並減緩因堆疊網路所產生出的過多間接注意力問題,以達到更完美串流語音識別效果。標點符號是文本資訊的組成部分,用以表示停頓、語氣以及詞語的性質和作用。然而一般用於訓練語音識別之語料皆未提供標點符號之標注,因而在語音識別任務中,無法直接提供具有標點符號的識別結果。本論文第二個工作,為了將標點符號標記任務融入於語音識別訓練中,我們基於Transducer模型架構來訓練語音識別主任務,並利用Multi-task Learning的訓練方式,將Transducer架構中的語言模型Predictor共享於兩種任務:1) Context Representation for Acoustic Model 2) Punctuation Prediction。第一種任務提供了ASR任務中所需的文本上下文資訊。第二種任務提供了預測Punctuation之文本語意資訊。而最後本論文也嘗試將Language Model任務導入,以提高Predictor的語意理解能力,進而提高語音識別與標點預測任務的準確度。 (zh_TW)
dc.description.abstract: In recent years, speech recognition research has shifted toward end-to-end models, which simplify the overall recognition pipeline. The 2015 paper "Listen, Attend and Spell" first applied the sequence-to-sequence architecture and the attention mechanism to end-to-end speech recognition, establishing the form that current end-to-end models still follow. Unfortunately, attention is built on full-sequence modeling: it cannot operate on partial sequences and therefore cannot be applied cleanly in streaming settings. To address this, this thesis uses a Layer-level Time Limited Attention Mask (L-TLAM), which improves the model's ability to encode incomplete sequences and alleviates the excessive indirect attention introduced by stacking layers, yielding better streaming recognition. Punctuation marks are an integral part of textual information, indicating pauses, tone, and the nature and function of words. However, the corpora commonly used to train speech recognizers provide no punctuation annotation, so a recognizer cannot directly emit punctuated transcripts. In the second part of this thesis, to fold punctuation labeling into speech recognition training, we train the main recognition task on a Transducer architecture and use multi-task learning to share the Transducer's language-model Predictor between two tasks: 1) context representation for the acoustic model, and 2) punctuation prediction. The first task supplies the textual context the ASR task needs; the second supplies the textual semantics needed to predict punctuation. Finally, the thesis also introduces a language-model task to strengthen the Predictor's semantic understanding, further improving the accuracy of both speech recognition and punctuation prediction. (en_US)
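The L-TLAM named in the abstract restricts each frame's self-attention to a bounded time window at every layer, which is what makes streaming decoding possible. A minimal sketch of such a per-layer band mask, assuming a configurable left context and lookahead (function and parameter names are illustrative, not taken from the thesis):

```python
import numpy as np

def time_limited_attention_mask(seq_len: int, left: int, right: int) -> np.ndarray:
    """Boolean mask where entry [t, s] is True iff frame t may attend to frame s.

    Each frame sees at most `left` past frames and `right` future frames,
    so the receptive field per layer stays bounded, a prerequisite for
    streaming recognition.
    """
    idx = np.arange(seq_len)
    rel = idx[None, :] - idx[:, None]   # rel[t, s] = s - t
    return (rel >= -left) & (rel <= right)

mask = time_limited_attention_mask(6, left=2, right=1)
# Applied in attention as: scores[~mask] = -inf before the softmax.
```

Applying the bounded mask at every layer (rather than only at the input) keeps the network's total lookahead from growing linearly with depth, which is how a layer-level mask differs from masking the input alone.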
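The multi-task setup described in the abstract shares one prediction network between the Transducer's joint network and a punctuation head. A toy numpy sketch of that weight sharing, where a single embedding matrix stands in for the real (recurrent) Predictor; all shapes and names are illustrative assumptions, not the thesis implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, n_punct, hidden = 16, 4, 8

# Stand-in for the Transducer's Predictor: maps previous output tokens
# to a context representation.
predictor_embed = rng.standard_normal((vocab, hidden))

# Two task heads consume the SAME shared representation:
W_joint = rng.standard_normal((hidden, vocab))    # 1) context for the joint (acoustic) network
W_punct = rng.standard_normal((hidden, n_punct))  # 2) punctuation prediction

prev_tokens = np.array([1, 3, 5])
h = predictor_embed[prev_tokens]   # shared Predictor output, shape (3, hidden)

joint_logits = h @ W_joint         # fed into the Transducer joint network
punct_logits = h @ W_punct         # punctuation class scores per token
```

Because both losses backpropagate into the shared Predictor parameters, the punctuation task regularizes the same representation the ASR task relies on, which is the mechanism the abstract credits for the accuracy gains.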
dc.subject: 多任務學習 (multi-task learning) (zh_TW)
dc.subject: 端到端 (end-to-end) (zh_TW)
dc.subject: 串流語音識別 (streaming speech recognition) (zh_TW)
dc.subject: 標點符號預測 (punctuation prediction) (zh_TW)
dc.subject: multi-task learning (en_US)
dc.subject: end-to-end (en_US)
dc.subject: streaming speech recognition (en_US)
dc.subject: punctuation prediction (en_US)
dc.title: 具有標點符號之端到端串流語音識別於多任務學習 (zh_TW)
dc.language.iso: zh-TW
dc.title: End-to-End Streaming Speech Recognition with Punctuation Marks for Multi-task Learning (en_US)
dc.type: 博碩士論文 (master's/doctoral thesis) (zh_TW)
dc.type: thesis (en_US)
dc.publisher: National Central University (en_US)
