Thesis 104522608: Complete Metadata Record

DC Field | Value | Language
dc.contributor | 資訊工程學系 | zh_TW
dc.creator | 鄧氏陲殷 | zh_TW
dc.creator | Dang Thi Thuy An | en_US
dc.date.accessioned | 2017-07-28T07:39:07Z |
dc.date.available | 2017-07-28T07:39:07Z |
dc.date.issued | 2017 |
dc.identifier.uri | http://ir.lib.ncu.edu.tw:88/thesis/view_etd.asp?URN=104522608 |
dc.contributor.department | 資訊工程學系 | zh_TW
dc.description | 國立中央大學 | zh_TW
dc.description | National Central University | en_US
dc.description.abstract | 本研究開發了多種深度學習模型,用於在真實環境中進行聲學場景分類(ASC)與聲音事件檢測(SED)。我們結合卷積神經網絡(CNN)與遞歸神經網絡(RNN)在音頻信號處理上的優點來建立模型:CNN 能有效提取多維數據的空間資訊,而 RNN 擅長學習具有時間順序的數據。我們在 DCASE 2017 challenge 的三個開發數據集上進行實驗,包括聲學場景數據集、稀有聲音事件數據集和複音聲音事件數據集。為了在數據有限的情況下減輕過度擬合問題,我們採用了一些數據增強技術,例如以給定的概率將輸入值設為零、加入高斯噪聲,或改變聲音的響度。所提方法在 DCASE 2017 challenge 的三個數據集上皆優於基準方法:聲學場景分類的準確度相對於基準方法提高了 7.2%;對於稀有聲音事件檢測,我們的方法平均誤差率為 0.26,F 評分為 85.9%,而基準方法為 0.53 和 72.7%;對於複音聲音事件檢測,我們的方法將誤差率改進為 0.59,而基準方法為 0.69。 | zh_TW
dc.description.abstract | In this work, we develop various deep learning models to perform acoustic scene classification (ASC) and sound event detection (SED) in real-life environments. In particular, we take advantage of both convolutional neural networks (CNNs) and recurrent neural networks (RNNs) for audio signal processing, and our proposed models are built from these two kinds of network. CNNs provide an effective way to capture the spatial structure of multidimensional data, while RNNs are powerful at learning temporally ordered data. We conduct experiments on three development datasets from the DCASE 2017 challenge: the acoustic scene dataset, the rare sound event dataset, and the polyphonic sound event dataset. To reduce overfitting on the limited data, we employ data augmentation techniques such as setting input values to zero with a given probability, adding Gaussian noise, and changing the sound loudness (illustrative sketches of these ideas follow the record below). The proposed methods outperform the DCASE 2017 challenge baselines on all three datasets. The accuracy of acoustic scene classification improves by 7.2% over the baseline. For rare sound event detection, we report an average error rate of 0.26 and an F-score of 85.9%, compared with the baseline's 0.53 and 72.7%. For polyphonic sound event detection, our method improves the error rate to 0.59, compared with the baseline's 0.69. | en_US
dc.subject | 深度學習 | zh_TW
dc.subject | CNNs | zh_TW
dc.subject | RNNs | zh_TW
dc.subject | 場景分類 | zh_TW
dc.subject | 聲音事件檢測 | zh_TW
dc.subject | Deep learning | en_US
dc.subject | CNNs | en_US
dc.subject | RNNs | en_US
dc.subject | scene classification | en_US
dc.subject | sound event detection | en_US
dc.title | 基於深度學習之聲音辨識及偵測 | zh_TW
dc.language.iso | zh-TW | zh-TW
dc.title | Sound Classification and Detection using Deep Learning | en_US
dc.type | 博碩士論文 | zh_TW
dc.type | thesis | en_US
dc.publisher | National Central University | en_US
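
The abstract describes models that combine CNN and RNN components: convolutional layers capture the spatial structure of the time-frequency input, and a recurrent layer then models the frame sequence. The record contains no code, so the following is only a minimal sketch of such a CRNN in Keras for the scene-classification case; the input shape, layer sizes, and the choice of a bidirectional GRU are illustrative assumptions rather than the thesis configuration (only the 15-class output matches the DCASE 2017 ASC task).

```python
from tensorflow.keras import layers, models

def build_crnn(frames=500, mel_bands=40, n_classes=15):
    """Minimal CRNN sketch; all layer sizes are illustrative assumptions."""
    # Input: one log-mel spectrogram treated as a 2-D "image" with one channel.
    inp = layers.Input(shape=(frames, mel_bands, 1))
    x = inp
    # CNN blocks: extract local spectro-temporal patterns; pool only the
    # frequency axis so the frame-level time resolution is preserved.
    for n_filters in (32, 64):
        x = layers.Conv2D(n_filters, 3, padding="same", activation="relu")(x)
        x = layers.MaxPooling2D(pool_size=(1, 2))(x)
    # Collapse the remaining frequency/channel axes into one vector per frame.
    x = layers.Reshape((frames, -1))(x)
    # RNN: learn the temporal order of the frame-level features.
    x = layers.Bidirectional(layers.GRU(64))(x)
    # One scene label per clip (15 scene classes in the DCASE 2017 ASC task).
    out = layers.Dense(n_classes, activation="softmax")(x)
    return models.Model(inp, out)

model = build_crnn()
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```

For the two detection tasks, the recurrent layer would instead return the full sequence and the output layer would use frame-wise sigmoid activations, since in the polyphonic case several events can be active at once; the abstract does not specify those architectures.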
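
The abstract also names three data augmentation techniques: zeroing input values with a given probability, adding Gaussian noise, and changing loudness. Below is a minimal NumPy sketch of how these could be applied to a log-magnitude spectrogram; the function name and all parameter values are hypothetical, and the thesis does not state in which domain (waveform or feature) each augmentation was applied.

```python
import numpy as np

def augment(spec, drop_prob=0.1, noise_std=0.1, max_gain_db=6.0, rng=None):
    """Augment one log-mel spectrogram `spec` (frames x mel bands).
    All parameter values here are illustrative, not the thesis settings."""
    rng = rng if rng is not None else np.random.default_rng()
    # 1) Set input values to zero with a given probability (dropout-style).
    spec = spec * (rng.random(spec.shape) >= drop_prob)
    # 2) Add Gaussian noise.
    spec = spec + rng.normal(0.0, noise_std, size=spec.shape)
    # 3) Change loudness: a constant gain on the waveform becomes an
    #    additive offset in the log-magnitude domain.
    spec = spec + rng.uniform(-max_gain_db, max_gain_db)
    return spec
```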
