DC Field | Value | Language
dc.contributor | Department of Computer Science and Information Engineering | zh_TW
dc.creator | 鄧氏陲殷 | zh_TW
dc.creator | Dang Thi Thuy An | en_US
dc.date.accessioned | 2017-07-28T07:39:07Z | |
dc.date.available | 2017-07-28T07:39:07Z | |
dc.date.issued | 2017 | |
dc.identifier.uri | http://ir.lib.ncu.edu.tw:444/thesis/view_etd.asp?URN=104522608 | |
dc.contributor.department | Department of Computer Science and Information Engineering | zh_TW
dc.description | National Central University | zh_TW
dc.description | National Central University | en_US
dc.description.abstract | This study develops various deep learning models for acoustic scene classification (ASC) and sound event detection (SED) in real-life environments. We build our models on the respective strengths of convolutional neural networks (CNNs) and recurrent neural networks (RNNs) for audio signal processing: CNNs provide an efficient way to extract spatial information from multidimensional data, while RNNs are powerful at learning temporally ordered data. Our experiments are conducted on the three development datasets of the DCASE 2017 challenge, namely the acoustic scene dataset, the rare sound event dataset, and the polyphonic sound event dataset. To avoid overfitting, we adopt several data augmentation techniques, such as setting input values to zero with a given probability, adding Gaussian noise, and changing the loudness of the sound.
The proposed methods outperform the baselines on all three DCASE 2017 challenge datasets. The accuracy of acoustic scene classification improves by 7.2% over the baseline. For rare sound event detection, our method achieves an average error rate of 0.26 and an F-score of 85.9%, versus 0.53 and 72.7% for the baseline. For polyphonic sound event detection, our method improves the error rate to 0.59, compared with 0.69 for the baseline. | zh_TW
dc.description.abstract | In this work, we develop various deep learning models to perform acoustic scene classification (ASC) and sound event detection (SED) in real-life environments. In particular, we take advantage of both convolutional neural networks (CNNs) and recurrent neural networks (RNNs) for audio signal processing, and our proposed models are constructed from these two networks. CNNs provide an effective way to capture spatial information in multidimensional data, while RNNs are powerful at learning temporal sequential data. We conduct experiments on three development datasets from the DCASE 2017 challenge: the acoustic scene dataset, the rare sound event dataset, and the polyphonic sound event dataset. To reduce overfitting, since the data is limited, we employ data augmentation techniques such as setting input values to zero with a given probability, adding Gaussian noise, and changing the sound loudness.
The proposed methods outperform the DCASE 2017 challenge baselines on all three datasets. The accuracy of acoustic scene classification improves by 7.2% compared with the baseline. For rare sound event detection, we report an average error rate of 0.26 and an F-score of 85.9%, compared with the baseline's 0.53 and 72.7%. For polyphonic sound event detection, our method achieves an improved error rate of 0.59, versus 0.69 for the baseline. | en_US
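The abstract names three augmentation steps: zeroing inputs with a given probability, adding Gaussian noise, and changing loudness. A minimal sketch of how such augmentation might look, assuming (time, frequency) feature matrices; the function name and default parameter values below are illustrative assumptions, not taken from the thesis.

import numpy as np

# Hypothetical augmentation sketch; names and defaults are assumptions.
def augment(features, drop_prob=0.1, noise_std=0.01, gain_range=(0.8, 1.2), rng=None):
    """Augment a (time, frequency) feature matrix as the abstract describes."""
    if rng is None:
        rng = np.random.default_rng()
    x = features.copy()
    # 1. Set input values to zero with a given probability (dropout-style).
    x = x * (rng.random(x.shape) >= drop_prob)
    # 2. Add Gaussian noise.
    x = x + rng.normal(0.0, noise_std, size=x.shape)
    # 3. Change loudness with a random global gain.
    return x * rng.uniform(*gain_range)

# Example: augment 500 frames of 40-band log-mel features.
spec_aug = augment(np.random.rand(500, 40), drop_prob=0.2)

The zeroing step corresponds to what the abstract calls interrupting input values to zero, which acts as input dropout on the feature matrix.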
dc.subject | deep learning | zh_TW
dc.subject | CNNs | zh_TW
dc.subject | RNNs | zh_TW
dc.subject | scene classification | zh_TW
dc.subject | sound event detection | zh_TW
dc.subject | Deep learning | en_US
dc.subject | CNNs | en_US
dc.subject | RNNs | en_US
dc.subject | scene classification | en_US
dc.subject | sound event detection | en_US
dc.title | Sound Classification and Detection Using Deep Learning | zh_TW
dc.language.iso | zh-TW | zh-TW |
dc.title | Sound Classification and Detection using Deep Learning | en_US
dc.type | Master's/doctoral thesis | zh_TW
dc.type | thesis | en_US
dc.publisher | National Central University | en_US