    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/74700


    Title: 基於深度學習之聲音辨識及偵測 (Sound Classification and Detection using Deep Learning)
    Author: 鄧氏陲殷 (An, Dang Thi Thuy)
    Contributor: Department of Computer Science and Information Engineering
    Keywords: Deep learning, CNNs, RNNs, scene classification, sound event detection
    Date: 2017-07-28
    Upload time: 2017-10-27 14:36:40 (UTC+8)
    Publisher: National Central University
    Abstract: In this work, we develop various deep learning models to perform acoustic scene classification (ASC) and sound event detection (SED) in real-life environments. In particular, we take advantage of both convolutional neural networks (CNNs) and recurrent neural networks (RNNs) for audio signal processing; our proposed models are constructed from these two network types. CNNs provide an effective way to capture the spatial information of multidimensional data, while RNNs are powerful at learning temporally sequential data. We conduct experiments on three development datasets from the DCASE 2017 challenge: the acoustic scene dataset, the rare sound event dataset, and the polyphonic sound event dataset. To reduce overfitting on the limited data, we employ data augmentation techniques such as setting input values to zero with a given probability, adding Gaussian noise, and changing the sound loudness.
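    The three augmentation techniques described in the abstract can be sketched as follows. This is a minimal NumPy illustration only: the function names, default probability, noise level, and gain range are assumptions for demonstration, not values taken from the thesis.

    ```python
    import numpy as np

    def random_interrupt(x, p=0.1, rng=None):
        """Set each input value to zero with probability p (dropout-style augmentation)."""
        if rng is None:
            rng = np.random.default_rng()
        mask = rng.random(x.shape) >= p
        return x * mask

    def add_gaussian_noise(x, sigma=0.01, rng=None):
        """Add zero-mean Gaussian noise with standard deviation sigma."""
        if rng is None:
            rng = np.random.default_rng()
        return x + rng.normal(0.0, sigma, size=x.shape)

    def change_loudness(x, low=0.8, high=1.2, rng=None):
        """Scale the signal by a random gain, changing its loudness."""
        if rng is None:
            rng = np.random.default_rng()
        gain = rng.uniform(low, high)
        return x * gain
    ```

    Each transform is typically applied on the fly during training, so every epoch sees a slightly different version of each example.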
    The proposed methods outperform the DCASE 2017 challenge baselines on all three datasets. Acoustic scene classification accuracy improves by 7.2% over the baseline. For rare sound event detection, we report an average error rate of 0.26 and an F-score of 85.9%, compared to the baseline's 0.53 and 72.7%. For polyphonic sound event detection, our method achieves an error rate of 0.59, a slight improvement over the baseline's 0.69.
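    The error rate and F-score quoted above are the standard segment-based metrics used in the DCASE challenges. As a reference for how they are computed from per-segment counts (a minimal sketch with illustrative helper names, not the evaluation code used in the thesis):

    ```python
    def f_score(tp, fp, fn):
        """Segment-based F-score from true positive, false positive, and false negative counts."""
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        return 2 * precision * recall / (precision + recall)

    def error_rate(substitutions, deletions, insertions, n_ref):
        """Segment-based error rate: total errors normalised by the number of reference events."""
        return (substitutions + deletions + insertions) / n_ref
    ```

    A lower error rate and a higher F-score both indicate better detection, which is why an error rate dropping from 0.53 to 0.26 and an F-score rising from 72.7% to 85.9% both represent improvements over the baseline.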
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Master's and Doctoral Theses

    Files in this item:

    File        Description  Size  Format  Views
    index.html               0Kb   HTML    272 (View/Open)


    All items in NCUIR are protected by copyright, with all rights reserved.
