Thesis Record 105523039: Detailed Information




Author: Yu-Hao Tseng (曾郁豪)    Department: Communication Engineering
Thesis Title: Sound Event Detection Based on Parallel Capsule Neural Networks
(Parallel Capsule Neural Networks for Sound Event Detection)
Related Theses
★ Satellite Image Super-Resolution Based on Regional Weighting
★ Adaptive High-Dynamic-Range Image Fusion Extending the Linear Characteristics of Exposure Curves
★ Complexity Control of H.264 Video Coding on a RISC Architecture
★ Articulation Disorder Assessment Based on Convolutional Recurrent Neural Networks
★ Few-Shot Image Segmentation Using Masks Generated by a Meta-Learning Classification-Weight Transfer Network
★ Implicit Representation with Attention for Reconstructing 3D Human Models from Images
★ Object Detection Using Adversarial Graph Neural Networks
★ 3D Face Reconstruction Based on Weakly Supervised Deformable Models
★ Low-Latency Singing-Voice Conversion on Edge Devices via Unsupervised Representation Disentanglement
★ Human Pose Estimation with FMCW Radar Based on Sequence-to-Sequence Models
★ Monocular Semantic Scene Completion Based on Multi-Level Attention
★ Contactless Real-Time Vital-Sign Monitoring with a Single FMCW Radar Based on Temporal Convolutional Networks
★ Video Traffic Description and Management over Video-on-Demand Networks
★ High-Quality Voice Conversion Based on Linear Predictive Coding and Pitch-Synchronous Frame Processing
★ Tone Adjustment by Extracting Formant Variations via Speech Resampling
★ Optimizing Transmission Efficiency of Real-Time Fine-Granularity-Scalable Video over Wireless LANs
Access Rights
  1. This electronic thesis is approved for immediate open access.
  2. The open-access full text is licensed only for personal, non-profit retrieval, reading, and printing for the purpose of academic research.
  3. Please comply with the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast it without authorization.

Abstract (Chinese, translated) Research on artificial intelligence has continued without pause for more than 60 years. As technology advances rapidly, we hope that computers can learn the way humans do. In recent years, AlphaGo's celebrated victory in computer Go has drawn many more people into machine learning and deep learning, and many different network architectures have been developed so that computers can assist humans in judging, classifying, and detecting data.
This thesis adopts the capsule neural network (CapsNets) from deep learning and proposes a system for sound event detection. The extracted features are fed into the neural network as vectors for training. Beyond the capsule network's inherent ability to distinguish overlapping events, we further extend it into a parallel capsule network so that each individual capsule can learn more features. With these methods, the error rate falls by about 41% relative to the DCASE 2017 baseline, and by about 26% relative to the first-place architecture of the DCASE 2017 challenge.
Abstract (English) Research on artificial intelligence has continued for more than 60 years. With the rapid development of technology, we hope that computers can have the same learning ability as human beings. In recent years, thanks to the success of AlphaGo, more and more people have entered the fields of machine learning and deep learning, and many different network architectures have been developed to allow computers to assist humans in detecting and classifying data.
We use the capsule neural network (CapsNets) from deep learning to propose a system for sound event detection. The extracted features are fed into the neural network as vectors for training. In addition to the capsule network's inherent ability to identify overlapping events, we expand it into a parallel capsule network so that each capsule can learn more features. Compared with the DCASE 2017 baseline, the proposed method reduces the error rate by about 41%; compared with the first-place architecture in the DCASE 2017 challenge, the error rate also drops by about 26%.
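The abstract compresses the method into a few sentences: feature vectors enter capsule layers, and the lengths of the output capsule vectors indicate which (possibly overlapping) events are present. For readers unfamiliar with CapsNets, the sketch below shows the two core mechanisms from Sabour et al. [6], the squash nonlinearity and dynamic routing by agreement, in plain NumPy. All sizes, the iteration count, and the random weights are illustrative assumptions; the thesis's parallel extension (several capsule branches run side by side and merged) is not reproduced here, so treat this as a single-branch sketch rather than the thesis's implementation.

```python
# Minimal NumPy sketch of the capsule mechanisms described in [6].
# Shapes and values are assumptions for illustration only.
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # Scale a vector's length into [0, 1) while keeping its orientation,
    # so that length can act as an event-presence probability.
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def dynamic_routing(u_hat, num_iters=3):
    # u_hat: prediction vectors from lower capsules,
    # shape (num_in, num_out, dim_out).
    num_in, num_out, _ = u_hat.shape
    b = np.zeros((num_in, num_out))  # routing logits
    for _ in range(num_iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # softmax over output capsules
        s = (c[..., None] * u_hat).sum(axis=0)  # weighted sum per output capsule
        v = squash(s)                           # output capsule vectors, (num_out, dim_out)
        b += (u_hat * v[None]).sum(axis=-1)     # reward predictions that agree with v
    return v

# Toy usage: route 8 primary capsules (dim 4) to 3 event capsules (dim 8).
rng = np.random.default_rng(0)
u = rng.standard_normal((8, 4))              # lower-level capsule outputs
W = 0.1 * rng.standard_normal((8, 3, 4, 8))  # learned transforms (random here)
u_hat = np.einsum('id,iodk->iok', u, W)      # per-pair prediction vectors
v = dynamic_routing(u_hat)
print(np.linalg.norm(v, axis=-1))            # vector lengths = presence scores
```

Because each output vector's length behaves as an independent presence score, overlapping events can be detected by thresholding the lengths separately, which is the property the abstract relies on for separating simultaneous sounds.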
Keywords (Chinese)
★ Computational auditory scene analysis (計算聽覺場景分析)
★ Sound event detection (聲音事件偵測)
★ Deep learning (深度學習)
★ Capsule neural network (膠囊神經網路)
Keywords (English)
★ Computational Auditory Scene Analysis
★ Sound Event Detection
★ Deep learning
★ Capsule neural network
Table of Contents
Chinese Abstract i
Abstract ii
Acknowledgments iii
Table of Contents iv
List of Figures vi
List of Tables viii
Chapter 1 Introduction 1
1-1 Research Motivation and Background 1
1-2 Thesis Organization 3
Chapter 2 Sound Event Detection 4
2-1 History of Acoustic Scene Classification and Sound Event Detection 4
2-1-1 The 2013 Detection and Classification of Acoustic Scenes and Events Challenge 5
2-1-2 The 2016-2019 Detection and Classification of Acoustic Scenes and Events Challenges 6
2-2 Features for Sound Event Detection 7
2-2-1 Log Mel-Scale Spectrogram 8
2-2-2 Mel-Frequency Cepstral Coefficients 9
2-3 Difficulties in Sound Event Detection 11
Chapter 3 Neural Networks and Deep Learning 12
3-1 Artificial Neural Networks 12
3-1-1 History of Artificial Neural Networks 13
3-1-2 Multilayer Perceptrons 16
3-2 Deep Learning 21
3-2-1 Deep Neural Networks 21
3-2-2 Convolutional Neural Networks 22
3-3 Capsule Neural Networks 25
3-3-1 Capsule Neural Networks vs. Convolutional Neural Networks 25
3-3-2 Vector In, Vector Out 27
3-3-3 Dynamic Routing 29
3-3-4 Capsule Network Architecture 31
3-3-5 Sound Event Detection with Capsule Neural Networks 34
Chapter 4 Proposed Architecture 36
4-1 Feature Extraction 36
4-2 Capsule Neural Network Architecture for Sound Event Detection 37
4-3 Convolution and Pooling Coverage 39
4-4 Parallel Capsule Neural Network Architecture 41
Chapter 5 Experiments and Analysis 46
5-1 Experimental Environment and Datasets 46
5-2 Evaluation Metrics 49
5-3 Comparison and Analysis of Experimental Results 51
Chapter 6 Conclusions and Future Work 55
References 56
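The outline's Sections 2-2-1 and 4-1 identify the "extracted features" of the abstract as log mel-scale spectrograms. As a minimal illustration of that front end, the following sketch computes log-mel features with librosa [18]; the window length, hop, and 40 mel bands are assumed values for the example, not necessarily the thesis's configuration, and a synthetic tone stands in for a real recording.

```python
# Hedged sketch of a log-mel front end using librosa [18].
# Frame and filter-bank settings are illustrative assumptions.
import numpy as np
import librosa

sr = 44100                                    # assumed sample rate
t = np.arange(sr) / sr
y = 0.5 * np.sin(2 * np.pi * 440 * t)         # 1 s test tone instead of real audio

n_fft = int(0.040 * sr)                       # 40 ms analysis window (assumed)
hop = n_fft // 2                              # 50% frame overlap (assumed)
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=n_fft,
                                     hop_length=hop, n_mels=40)
log_mel = librosa.power_to_db(mel, ref=np.max)  # log-scaled mel energies

print(log_mel.shape)                          # (n_mels, n_frames) feature matrix
```

Each column of this matrix is one time frame's feature vector, the form in which the abstract says features are fed into the network.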
References
[1] D. Wang and G. J. Brown, “Computational Auditory Scene Analysis: Principles, Algorithms, and Applications,” Piscataway, NJ, USA: IEEE Press, 2006.
[2] A. S. Bregman, “Auditory Scene Analysis,” MIT Press, Cambridge, MA, 1990.
[3] M. Slaney, “The History and Future of CASA,” Speech separation by humans and machines, pp.199-211, Springer US, 2005.
[4] N. Sawhney, “Situational Awareness from Environmental Sounds,” Technical Report, Massachusetts Institute of Technology, 1997.
[5] D. Barchiesi, D. Giannoulis, D. Stowell, M. D. Plumbley, “Acoustic Scene Classification,” in IEEE Signal Processing Magazine, vol. 32, no. 3, pp.16-34, May 2015.
[6] S. Sabour, N. Frosst, and G. E. Hinton, “Dynamic routing between capsules,” in Proceedings of the 31st Conference on Neural Information Processing Systems, pp. 3859-3869, 2017.
[7] Y. C. Wu, P. C. Chang, C. Y. Wang, and J. C. Wang, “Asymmetric Kernel Convolutional Neural Network for acoustic scenes classification,” in 2017 IEEE International Symposium on Consumer Electronics (ISCE), Kuala Lumpur, Malaysia, Nov. 2017.
[8] R. Stiefelhagen and J. Garofolo, Eds., “Multimodal Technologies for Perception of Humans,” First International Evaluation Workshop on Classification of Events, Activities and Relationships, CLEAR 2006, Southampton, UK, April 6-7, 2006, Revised Selected Papers, vol. 4122, Springer, 2007.
[9] D. Giannoulis, E. Benetos, D. Stowell, and M. D. Plumbley, IEEE AASP CASA Challenge - Public Dataset for Scene Classification Task, retrieved Jun. 29, 2017.
[10] D. Giannoulis, E. Benetos, D. Stowell, and M. D. Plumbley, IEEE AASP CASA Challenge - Private Dataset for Scene Classification Task, retrieved Jun. 29, 2017.
[11] D. Stowell, et al., “Detection and classification of acoustic scenes and events,” IEEE Transactions on Multimedia, vol. 17, no. 10, pp. 1733-1746, 2015.
[12] A. Mesaros, T. Heittola, and T. Virtanen, “TUT database for acoustic scene classification and sound event detection,” in 2016 24th European Signal Processing Conference (EUSIPCO), pp. 1128-1132, 2016.
[13] A. Mesaros, et al., “Detection and classification of acoustic scenes and events: Outcome of the DCASE 2016 challenge,” IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP), vol. 26, no. 2, pp. 379-393, 2018.
[14] A. Mesaros, et al., “DCASE 2017 challenge setup: Tasks, datasets and baseline system,” in Proceedings of the DCASE 2017 Workshop on Detection and Classification of Acoustic Scenes and Events, 2017.
[15] A. Mesaros, T. Heittola, and T. Virtanen, “A multi-device dataset for urban acoustic scene classification,” in IEEE AASP Challenge on Detection and Classification of Acoustic Scenes and Events (DCASE), 2018.
[16] ETSI Standard Doc., “Speech Processing, Transmission and Quality Aspects (STQ); Distributed Speech Recognition; Front-End Feature Extraction Algorithm; Compression Algorithms,” ES 201 108, v1.1.3, Sep. 2003.
[17] ETSI Standard Doc., “Speech Processing, Transmission and Quality Aspects (STQ); Distributed Speech Recognition; Front-End Feature Extraction Algorithm; Compression Algorithms,” ES 202 050, v1.1.5, Jan. 2007.
[18] Librosa: an open source Python package for music and audio analysis, https://github.com/librosa, retrieved Dec. 1, 2016.
[19] Librosa: an open source Python package for music and audio analysis, https://github.com/librosa, retrieved Dec. 1, 2016.
[20] S. J. Russell and P. Norvig, “Artificial Intelligence: A Modern Approach,” Malaysia: Pearson Education Limited, 2016.
[21] W. S. McCulloch and W. Pitts, “A logical calculus of the ideas immanent in nervous activity,” Bulletin of Mathematical Biophysics, vol. 5, no. 4, pp. 115-133, Dec. 1943.
[22] D. O. Hebb, “Organization of Behavior,” New York: Wiley & Sons, 1949.
[23] F. Rosenblatt, “The perceptron: A probabilistic model for information storage and organization in the brain,” Psychological Review, vol. 65, no. 6, pp. 386-408, 1958.
[24] M. Minsky and S. Papert, “Perceptrons,” Cambridge, MA: MIT Press, 1969.
[25] N. Srivastava, G. E. Hinton, A. Krizhevsky, et al., “Dropout: A simple way to prevent neural networks from overfitting,” Journal of Machine Learning Research, vol. 15, pp. 1929-1958, Jun. 2014.
[26] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-Based Learning Applied to Document Recognition,” in Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, Nov. 1998.
[27] I. Mrazova, and M. Kukacka, “Hybrid convolutional neural networks,” in 6th IEEE International Conference on Industrial Informatics (INDIN), 2008.
[28] K. Simonyan, and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” in International Conference on Learning Representations (ICLR), 2015.
[29] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1-9, 2015.
[30] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770-778, 2016.
[31] L. Deng, “The MNIST database of handwritten digit images for machine learning research [best of the web],” IEEE Signal Processing Magazine, vol. 29, no. 6, pp. 141-142, 2012.
[32] T. Tieleman, “affNIST,” dataset, https://www.cs.toronto.edu/~tijmen/affNIST/, 2013. [Accessed: 2018-05-08].
[33] F. Vesperini, et al., “Polyphonic sound event detection by using capsule neural networks,” IEEE Journal of Selected Topics in Signal Processing, 2019.
[34] TensorFlow: an open source Python package for machine intelligence, https://www.tensorflow.org, retrieved Dec. 1, 2016.
[35] J. Dean, et al. “Large-Scale Deep Learning for Building Intelligent Computer Systems,” in Proceedings of the Ninth ACM International Conference on Web Search and Data Mining, pp. 1-1, Feb. 2016.
[36] A. Mesaros, T. Heittola, and T. Virtanen, “Metrics for polyphonic sound event detection,” Applied Sciences, 6(6):162, 2016.
[37] S. Adavanne and T. Virtanen, “A report on sound event detection with different binaural features,” arXiv preprint arXiv:1710.02997, 2017.
Advisor: Pao-Chi Chang (張寶基)    Approval Date: 2019-6-18