Master's/Doctoral Thesis 106522602 Complete Metadata Record

DC Field / Value / Language
dc.contributor Department of Computer Science and Information Engineering zh_TW
dc.creator 萬程玲 zh_TW
dc.creator Join Wan Chanlyn Sigalingging en_US
dc.date.accessioned 2019-07-25T07:39:07Z
dc.date.available 2019-07-25T07:39:07Z
dc.date.issued 2019
dc.identifier.uri http://ir.lib.ncu.edu.tw:88/thesis/view_etd.asp?URN=106522602
dc.contributor.department Department of Computer Science and Information Engineering zh_TW
dc.description National Central University zh_TW
dc.description National Central University en_US
dc.description.abstract As computationally powerful computer hardware has become available to many users, the number of speech-processing devices such as smartphones, tablets, and notebooks has increased, and speech therefore plays an important role in many applications, e.g., hands-free telephony, digital hearing aids, speech-based computer interfaces, and home entertainment systems. Speech enhancement algorithms attempt to improve the performance of communication systems when their input or output signals are corrupted by noise. To address these issues, we present a hierarchical extreme learning machine (H-ELM) framework aimed at the effective and fast removal of background noise from a single-channel speech signal, based on a set of randomly chosen hidden units and analytically determined output weights, and deployed by leveraging sparse autoencoders. Multi-task learning and transfer learning have recently been adopted to improve the performance of deep learning models; adopting these two approaches, we build H-ELM model adaptation in this study to investigate the compatibility of H-ELM and to achieve further performance improvements. We train on Aurora-4 and adapt with TIMIT, with the help of the previously trained model. We also use the Ideal Ratio Mask (IRM) feature mask to compare against the feature map in our experiments. The experimental results indicate that both the H-ELM and H-ELM model adaptation based speech enhancement techniques consistently outperform the conventional DDAE framework, and that H-ELM model adaptation improves the performance of the H-ELM adapted to TIMIT, in terms of standardized objective evaluations, under various testing conditions. In addition, the IRM feature mask is slightly better than the feature map. zh_TW
dc.description.abstract As computationally powerful computer hardware has become available to many users, the number of speech processing devices such as smartphones, tablets and notebooks has increased. As a consequence, speech plays an important role in many applications, e.g., hands-free telephony, digital hearing aids, speech-based computer interfaces, or home entertainment systems. Speech enhancement algorithms attempt to improve the performance of communication systems when their input or output signals are corrupted by noise. To address these issues, we present a hierarchical extreme learning machine (H-ELM) framework, aimed at the effective and fast removal of background noise from a single-channel speech signal, based on a set of randomly chosen hidden units and analytically determined output weights, and deployed by leveraging sparse autoencoders. Multi-task learning and transfer learning approaches have recently been adopted to improve the performance of deep learning models. Adopting these two approaches, we build H-ELM model adaptation in this study to investigate the compatibility of H-ELM and to achieve further improvements in performance. We train on Aurora-4 and adapt with TIMIT, with the help of the previously trained model. We also use the Ideal Ratio Mask (IRM) feature mask to compare against the feature map in our experiments. The experimental results indicate that both the H-ELM and H-ELM model adaptation based speech enhancement techniques consistently outperform the conventional deep denoising autoencoder (DDAE) framework, and that H-ELM model adaptation can improve the performance of the H-ELM adapted to TIMIT, in terms of standardized objective evaluations, under various testing conditions. Besides that, the IRM feature mask is slightly better than the feature map. en_US
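The core extreme-learning-machine step described in the abstract (hidden-unit weights chosen at random, output weights determined analytically) can be illustrated with a minimal sketch. This is not the thesis implementation: the function names, hidden-layer size, and activation below are illustrative assumptions, and the hierarchical sparse-autoencoder stages of H-ELM, the model adaptation, and the masking front end are omitted.

import numpy as np

# Minimal ELM sketch (illustrative only): the hidden layer is random and never
# trained; only the output weights are solved in closed form via a pseudoinverse.
def elm_fit(X, T, n_hidden=512, seed=0):
    # X: (n_samples, n_features) noisy speech features; T: (n_samples, n_targets) clean targets
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # randomly chosen hidden-unit weights
    b = rng.standard_normal(n_hidden)                # random biases
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    beta = np.linalg.pinv(H) @ T                     # analytically determined output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    # Map new noisy features through the fixed random layer and the learned output weights
    return np.tanh(X @ W + b) @ beta

For the IRM mentioned in the abstract, a commonly used definition (which may differ in detail from the one used in the thesis) is IRM(t, f) = [S(t, f)^2 / (S(t, f)^2 + N(t, f)^2)]^(1/2), where S and N are the clean-speech and noise magnitudes in time-frequency bin (t, f); the estimated mask is applied to the noisy spectrogram to recover the clean speech.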
dc.subject deep denoising autoencoder zh_TW
dc.subject hierarchical extreme learning zh_TW
dc.subject model adaptation zh_TW
dc.subject IRM zh_TW
dc.subject speech enhancement zh_TW
dc.subject deep denoising autoencoder en_US
dc.subject hierarchical extreme learning en_US
dc.subject model adaptation en_US
dc.subject IRM en_US
dc.subject speech enhancement en_US
dc.title Cross-corpus Speech Enhancement Based on H-ELM Adaptation zh_TW
dc.language.iso zh-TW zh-TW
dc.title H-ELM Model Adaptation for Across-corpora Speech Enhancement en_US
dc.type Master's/Doctoral thesis zh_TW
dc.type thesis en_US
dc.publisher National Central University en_US
