NCU Institutional Repository (中大機構典藏): Item 987654321/44751


    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/44751


    Title: 強健性語音辨識及語者確認之研究 (A Study of Robust Speech Recognition and Speaker Verification)
    Author: 凌欣暉
    Contributor: 電機工程研究所 (Graduate Institute of Electrical Engineering)
    Keywords: keyword spotting; speech recognition; speaker verification; support vector machine; robust features
    Date: 2010-08-03
    Uploaded: 2010-12-09 13:54:27 (UTC+8)
    Publisher: National Central University (國立中央大學)
    Abstract: This thesis comprises three parts: keyword spotting, cepstral feature-statistics normalization, and speaker verification.

    For keyword spotting, right-context-dependent sub-syllable phone models are concatenated to build the keyword and filler models.

    Environmental mismatch is the major source of performance degradation in speech recognition. Cepstral feature normalization is widely used to produce robust features, and a common advantage of these methods is their low computational complexity and fast execution. Evaluated on the Aurora 2 corpus, combining histogram equalization with an ARMA low-pass filter raises the digit recognition rate of histogram equalization alone from 84.93% to 86.37%, while combining it with an adaptive ARMA filter raises it further to 86.91%.

    The speaker verification system combines Gaussian mixture models (GMM) and support vector machines (SVM) through a parametric kernel function. Each speaker's GMM is MAP-adapted from a universal background model (UBM), and its parameters are stacked into a supervector; nuisance attribute projection (NAP) is then applied to compensate the supervectors. In the training stage, the supervectors are normalized and used to train the SVM model. For impostor selection, the top n impostor utterances whose characteristics are most similar to the target speaker are chosen, which makes the trained SVM model more discriminative. In the testing stage, test normalization (T-norm) adjusts the distance scores. Experiments on the NIST 2001 SRE corpus show that the verification system using a 64-mixture parametric kernel with NAP and test normalization achieves the best equal error rate (EER) of 4.17% and detection cost function (DCF) of 0.0491.
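    Two of the techniques named in the abstract can be illustrated with short sketches. The first is a minimal, hypothetical version of a histogram-equalization-plus-ARMA front end: each cepstral dimension is mapped onto a standard-normal reference through its empirical CDF, and an MVA-style ARMA low-pass filter then smooths the feature trajectories over time. The function names, the Gaussian reference, and the filter order are assumptions for illustration, not the thesis implementation (which also includes an adaptive ARMA variant).

```python
import numpy as np
from scipy.stats import norm

def histogram_equalization(cepstra):
    """Per-utterance, per-dimension histogram equalization: map each
    cepstral dimension onto a standard-normal reference via its
    empirical CDF (assumed reference distribution)."""
    T, D = cepstra.shape
    out = np.empty_like(cepstra, dtype=float)
    for d in range(D):
        ranks = np.argsort(np.argsort(cepstra[:, d]))   # rank of each frame
        cdf = (ranks + 0.5) / T                         # empirical CDF in (0, 1)
        out[:, d] = norm.ppf(cdf)                       # map onto Gaussian reference
    return out

def arma_smooth(features, order=2):
    """MVA-style ARMA low-pass smoothing along the time axis:
    y[t] = (y[t-M] + ... + y[t-1] + x[t] + ... + x[t+M]) / (2M + 1)."""
    T, D = features.shape
    out = features.astype(float)
    for t in range(order, T - order):
        ar = out[t - order:t].sum(axis=0)               # previous M outputs
        ma = features[t:t + order + 1].sum(axis=0)      # current and next M inputs
        out[t] = (ar + ma) / (2 * order + 1)
    return out

# Toy usage on a synthetic (frames x coefficients) cepstral matrix.
cepstra = np.random.default_rng(0).normal(size=(200, 13))
robust = arma_smooth(histogram_equalization(cepstra))
```

    The second sketch outlines the GMM-supervector SVM idea behind the speaker verification system: MAP-adapt the UBM means toward one utterance, stack the adapted means into a supervector, and train a linear SVM separating the target speaker from impostors. NAP compensation, supervector normalization, impostor ranking, and T-norm scoring are omitted; the relevance factor, feature dimensions, and all data here are synthetic placeholders, so this is a rough illustration rather than the system evaluated on NIST 2001.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

def gmm_supervector(ubm, frames, relevance=16.0):
    """MAP-adapt the UBM means toward one utterance's frames and stack
    them into a single supervector (mean adaptation only)."""
    post = ubm.predict_proba(frames)                    # (T, M) responsibilities
    n = post.sum(axis=0)                                # soft counts per mixture
    f = post.T @ frames / np.maximum(n, 1e-8)[:, None]  # per-mixture frame means
    alpha = (n / (n + relevance))[:, None]              # relevance-MAP weights
    return (alpha * f + (1.0 - alpha) * ubm.means_).ravel()

# Toy usage with synthetic 39-dimensional features standing in for MFCCs.
rng = np.random.default_rng(0)
ubm = GaussianMixture(n_components=64, covariance_type="diag",
                      max_iter=20, random_state=0).fit(rng.normal(size=(4000, 39)))
target_sv = gmm_supervector(ubm, rng.normal(size=(300, 39)))
impostor_svs = [gmm_supervector(ubm, rng.normal(size=(300, 39))) for _ in range(10)]

X = np.vstack([target_sv] + impostor_svs)
y = np.array([1] + [0] * len(impostor_svs))
svm = SVC(kernel="linear").fit(X, y)                    # target-vs-impostor SVM
score = svm.decision_function(gmm_supervector(ubm, rng.normal(size=(300, 39)))[None, :])
```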
    Appears in collections: [Graduate Institute of Electrical Engineering] Theses and Dissertations

    Files in this item:

    index.html (HTML, 0 KB)


    All items in NCUIR are protected by copyright, with all rights reserved.
