NCU Institutional Repository (中大機構典藏), providing theses and dissertations, past exam papers, journal articles, and research projects: Item 987654321/8920


    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/8920


    Title: 國語語音強健辨認之研究; Robust speech recognition in noisy environments
    Authors: 黃國彰; Kuo-Chang Huang
    Contributors: 電機工程研究所
    Keywords: 強健特徵參數; 模型補償; robust features; model compensation
    Date: 2003-05-30
    Uploaded: 2009-09-22 11:37:41 (UTC+8)
    Publisher: 國立中央大學圖書館 (National Central University Library)
    Abstract: Despite today's sophisticated automatic speech recognition (ASR) techniques, a single recognizer is usually incapable of accounting for the varying conditions of a typical natural environment. Higher robustness across a range of noise conditions can potentially be achieved by combining the results of several recognizers operating in parallel. To overcome this problem and improve the performance of speech recognition systems under additive noise, special attention must be paid to robust features and model compensation. This thesis addresses noise robustness in automatic speaker-independent speech recognition; both model compensation and robust features are treated. In the model compensation stage, we first investigate a projection-based group delay scheme (PGDS) likelihood measure that significantly reduces the effect of noise contamination on recognition. Because the norm of the cepstral/GDS vector shrinks when speech signals are corrupted by additive noise, the HMM parameters, namely the mean vectors and covariance matrices, must be further modified. The proposed approach compensates each mean vector using a projection-based scale factor and a mean-compensation bias, and adjusts the covariance matrix using a variance-adaptive function. The bias and variance-adaptive functions, estimated from training and/or testing data, balance the mismatch between environments. Finally, a state-duration method is used to counter the erroneous path segmentation that additive noise causes in Viterbi decoding. Second, we propose a model compensation method similar to parallel model combination. It rests on the fact that the autocorrelation function of the sum of two statistically independent signals equals the sum of their individual autocorrelation functions.
Therefore, to adjust a clean model, each state's spectral representation is transformed from the autoregressive, or cepstral, domain to the autocorrelation domain. The autocorrelation of the clean model is then added to a sample of the autocorrelation of the additive noise, yielding the autocorrelation of the noisy signal, which is transformed back to the original spectral representation. The result is an adjusted model better able to handle the noisy signal. Most speech recognition systems are based on cepstral coefficients and their first- and second-order derivatives. The derivatives are normally approximated by fitting a linear regression line to a fixed-length segment of consecutive frames; the time resolution and smoothness of the estimated derivative depend on the segment length. Herein, we present an approach to improve the representation of speech dynamics based on combining multiple time resolutions. To illustrate the procedure, we examine two feature-combination systems. In the first, we combine separate input streams that use different features, namely cepstral and group-delay-spectrum coefficients, leading to higher performance in all noise conditions. In the second, we extract features over variable-sized windows of three or five times the original window size. Because different feature combinations capture different information, and multi-scale features are more robust to noise, the integrated system achieves a significant performance improvement on both clean speech and real environmental noise.
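The additivity of autocorrelations that underlies the second compensation method can be checked numerically. A minimal Python sketch, where the sinusoidal "speech" frame, the noise level, and the frame length are illustrative assumptions:

```python
import numpy as np

def autocorr(x, max_lag):
    """Biased autocorrelation estimate R[k] = (1/N) * sum_n x[n] x[n+k]."""
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag)])

rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 0.05 * np.arange(4000))  # stand-in for a clean frame
noise = rng.normal(scale=0.5, size=4000)             # independent additive noise

r_sum = autocorr(speech + noise, 10)                  # R of the noisy signal
r_add = autocorr(speech, 10) + autocorr(noise, 10)    # sum of individual R's

# For independent signals the two estimates agree up to cross-terms
# that shrink as the frame length grows.
print(np.max(np.abs(r_sum - r_add)))
```

The residual is exactly the empirical cross-correlation between speech and noise, which vanishes in expectation when the two are independent.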
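The cepstrum-to-autocorrelation round trip described above can be sketched as follows. The FFT size, the symmetric cepstral construction, and the power-spectrum floor are illustrative assumptions, not the thesis's exact formulation:

```python
import numpy as np

def compensate_cepstral_mean(clean_cep, noise_autocorr, n_fft=64):
    """PMC-style mean adjustment: cepstrum -> power spectrum ->
    autocorrelation, add the noise autocorrelation, transform back."""
    # Build a symmetric cepstral sequence so its DFT is a real log spectrum.
    cep = np.zeros(n_fft)
    cep[0] = clean_cep[0]
    for k in range(1, len(clean_cep)):
        cep[k] = cep[n_fft - k] = clean_cep[k] / 2.0
    log_spec = np.real(np.fft.fft(cep))          # cepstrum -> log power spectrum
    power = np.exp(log_spec)                     # -> power spectrum
    r_clean = np.real(np.fft.ifft(power))        # -> autocorrelation
    r_noisy = r_clean + noise_autocorr           # additivity of autocorrelations
    noisy_power = np.maximum(np.real(np.fft.fft(r_noisy)), 1e-12)
    noisy_cep = np.real(np.fft.ifft(np.log(noisy_power)))  # back to cepstrum
    out = np.empty(len(clean_cep))
    out[0] = noisy_cep[0]
    out[1:] = 2.0 * noisy_cep[1:len(clean_cep)]  # undo the symmetric split
    return out
```

With a zero noise autocorrelation the round trip returns the clean cepstrum unchanged; adding white noise (an impulse at lag 0) raises the spectral floor and hence the zeroth cepstral coefficient.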
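The linear-regression approximation of the derivatives mentioned above is commonly written as d_t = Σ_k k (c_{t+k} − c_{t−k}) / (2 Σ_k k²) over a window of 2k+1 frames. A sketch, where edge padding by frame repetition is an assumed convention:

```python
import numpy as np

def delta(features, k=2):
    """Delta coefficients via linear regression over 2k+1 consecutive frames.

    features: (num_frames, dim) cepstral matrix. A wider window (larger k)
    gives a smoother but less time-resolved derivative estimate.
    """
    num = np.arange(1, k + 1)
    denom = 2 * np.sum(num ** 2)
    # Pad the edges by repeating the first/last frame.
    padded = np.pad(features, ((k, k), (0, 0)), mode="edge")
    out = np.zeros_like(features, dtype=float)
    for t in range(features.shape[0]):
        frame = padded[t:t + 2 * k + 1]
        out[t] = sum(i * (frame[k + i] - frame[k - i]) for i in num) / denom
    return out
```

On a linear ramp of features the interior deltas equal the ramp's slope, which is the regression-line interpretation of the formula.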
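The variable-window idea of the second system can be illustrated with a toy front end; log frame energy stands in for a full cepstral analysis here, and the window sizes, hop, and scale factors are assumed values:

```python
import numpy as np

def multi_resolution_frames(signal, base_win=256, hop=128, scales=(1, 3, 5)):
    """Stack features computed over windows of 1x, 3x, and 5x the base size,
    all centred on the same frame positions, one row per frame."""
    centers = np.arange(base_win // 2, len(signal) - base_win // 2, hop)
    feats = []
    for c in centers:
        row = []
        for s in scales:
            half = (base_win * s) // 2
            lo, hi = max(0, c - half), min(len(signal), c + half)
            seg = signal[lo:hi]
            row.append(np.log(np.sum(seg ** 2) + 1e-12))  # log frame energy
        feats.append(row)
    return np.array(feats)  # shape: (num_frames, len(scales))
```

Each frame thus carries both fine and coarse temporal context, which is the property the abstract credits for the added noise robustness.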
    Appears in Collections: [電機工程研究所] 博碩士論文



    All items in NCUIR are protected by copyright, with all rights reserved.

