Thesis Record 105523027: Detailed Information
Author: Pei-Sin Liaw (廖沛欣)    Department: Department of Communication Engineering
Title: Video Caption Technique Based on Joint Image-Audio Deep Learning (結合影像與音訊深度學習之影片描述技術)
Related Theses
★ Satellite Image Super-Resolution Based on Regional Weighting
★ Adaptive High Dynamic Range Image Fusion Extending the Linear Characteristics of Exposure Curves
★ Complexity Control of H.264 Video Coding Implemented on a RISC Architecture
★ Articulation Disorder Assessment Based on Convolutional Recurrent Neural Networks
★ Few-Shot Image Segmentation Using Mask Generation with a Meta-Learning Classification-Weight Transfer Network
★ Implicit Representation with Attention Mechanism for Image-Based Reconstruction of 3D Human Models
★ Object Detection Using Adversarial Graph Neural Networks
★ 3D Face Reconstruction Based on Weakly Supervised Learning of Deformable Models
★ Low-Latency Singing Voice Conversion on Edge Devices Using Unsupervised Representation Disentanglement Learning
★ Human Pose Estimation from FMCW Radar Based on Sequence-to-Sequence Models
★ Monocular-Camera Semantic Scene Completion Based on Multi-Level Attention Mechanisms
★ Contactless Real-Time Vital-Sign Monitoring with a Single FMCW Radar Based on Temporal Convolutional Networks
★ Video Traffic Description and Management over Video-on-Demand Networks
★ High-Quality Voice Conversion Based on Linear Predictive Coding and Pitch-Synchronous Frame Processing
★ Tone Adjustment Based on Extracting Formant Variations via Speech Resampling
★ Optimization of Transmission Efficiency for Real-Time Fine-Granularity Scalable Video over Wireless LANs
1. The author has agreed to make the electronic full text of this thesis openly available immediately.
2. The open-access electronic full text is licensed for academic research only: personal, non-profit retrieval, reading, and printing.
3. Please comply with the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast the work without authorization.

Abstract (Chinese): With the advancement of technology, people have hoped since the 20th century that computers could learn the way humans do, which gave rise to artificial intelligence (AI). As large numbers of researchers devoted themselves to AI, techniques such as machine learning and, later, deep learning were developed to let computers learn how to judge and classify data. In recent years, artificial neural networks have become the most widely used approach for learning from large amounts of data, and many network architectures have been developed for different forms of data.
This thesis uses several neural networks to extract features from a video's frames, its audio track, and the sentences that describe it. The image and audio features are extracted with convolutional neural networks (CNNs). For the text, the words are first encoded as numerical indices and then mapped through word embeddings so that continuous relationships exist between words. Finally, the concatenated image and audio features are fed into the semantic compositional network as its initialization, the semantic features serve as an enhancement matrix, and the network is trained to describe the video in natural language.
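To make the fusion-and-decoding idea concrete, the following is a minimal sketch under stated assumptions: pre-extracted CNN image and audio feature vectors are concatenated and projected to initialize an LSTM decoder that is trained to emit the caption word by word. It is illustrative only; the actual model is a semantic compositional network whose LSTM weights are additionally composed from the semantic features, and all dimensions, layer names, and the use of tf.keras here are assumptions rather than the thesis implementation.

# A minimal sketch (assumed tf.keras implementation, not the thesis code):
# concatenate pre-extracted CNN image and audio features, project them to the
# LSTM decoder's initial state, and train the decoder to predict the caption
# word by word through a word-embedding layer. All sizes are placeholders.
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB_SIZE, EMBED_DIM, HIDDEN = 10000, 300, 512   # assumed sizes
MAX_LEN, IMG_DIM, AUD_DIM = 30, 2048, 256         # assumed feature dimensions

img_feat = layers.Input(shape=(IMG_DIM,), name="cnn_image_feature")
aud_feat = layers.Input(shape=(AUD_DIM,), name="cnn_audio_feature")
caption_in = layers.Input(shape=(MAX_LEN,), dtype="int32", name="caption_tokens")

# fuse the visual and audio features and use them as the decoder's initial state
fused = layers.Concatenate()([img_feat, aud_feat])
init_h = layers.Dense(HIDDEN, activation="tanh")(fused)
init_c = layers.Dense(HIDDEN, activation="tanh")(fused)

# word embeddings give the words continuous relationships to one another
embedded = layers.Embedding(VOCAB_SIZE, EMBED_DIM, mask_zero=True)(caption_in)
decoded = layers.LSTM(HIDDEN, return_sequences=True)(
    embedded, initial_state=[init_h, init_c])
next_word = layers.Dense(VOCAB_SIZE, activation="softmax")(decoded)

model = Model([img_feat, aud_feat, caption_in], next_word)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")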
The sentences produced by the semantic compositional network are scored with automatic language-evaluation metrics, and we find that adding audio features helps the whole network. Across all metrics, when sound-event and acoustic-scene features are added to the image features, the generated video descriptions score higher than those of the conventional network trained on image features alone. For BLEU, considering n-gram lengths from one to four words, every score improves by more than 1%; CIDEr-D improves by as much as 2.27%; and METEOR and ROUGE-L improve by 0.2% and 0.7%, respectively. The effect is substantial.
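As a concrete note on the audio side, the sound-event and acoustic-scene CNNs mentioned above typically operate on a time-frequency representation of the soundtrack; the short sketch below uses librosa [14][15] to compute a log-mel spectrogram. The sampling rate, window, hop, and mel-band settings are illustrative assumptions, not the configuration used in the thesis.

import numpy as np
import librosa

def log_mel_features(wav_path, sr=44100, n_mels=64):
    """Load the audio track of a video and return a log-mel spectrogram
    (n_mels x frames) suitable as input to an audio CNN."""
    y, sr = librosa.load(wav_path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=2048,
                                         hop_length=1024, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)

# example call (hypothetical file name):
# features = log_mel_features("video_audio_track.wav")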
Abstract (English): With the advancement of technology, starting from the 20th century, people have hoped that computers would have the same learning ability as humans. With a large number of researchers investing in artificial intelligence, technologies such as machine learning and deep learning were gradually developed to let computers learn how to make decisions and classifications. In recent years, most researchers have used neural networks to learn from large amounts of data and have developed many neural network architectures for different data forms.
This thesis uses different neural networks to extract image, audio, and semantic features. Feature extraction for images and audio is based on convolutional neural networks (CNNs). For the semantic features, the text is first encoded into a numerical representation and then passed through word embeddings so that continuous relationships exist between the words. Finally, the concatenation of image and audio features is fed into the LSTM to initialize its first step, which is expected to provide an overview of the video content.
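The step of "encoding the text into a numerical representation" amounts to building a vocabulary and mapping every caption word to an integer index before the embedding layer; the tiny example below (hypothetical vocabulary and captions, not taken from the dataset) illustrates that preprocessing.

# Hypothetical illustration of turning captions into the integer indices that a
# word-embedding layer consumes; index 0 is reserved for padding, 1 for unknown words.
captions = ["a man is playing a guitar", "a dog runs on the grass"]

vocab = {"<pad>": 0, "<unk>": 1}
for sentence in captions:
    for word in sentence.split():
        vocab.setdefault(word, len(vocab))

def encode(sentence, max_len=10):
    ids = [vocab.get(w, 1) for w in sentence.split()]
    return ids + [0] * (max_len - len(ids))   # pad to a fixed length

print(encode("a man is playing a guitar"))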
Using automatic language-evaluation metrics to score the sentences output by the semantic compositional network, we find that adding audio features is helpful to the whole network. For every metric, when sound-event and acoustic-scene features are added to the image features, the video descriptions output by the network with audio features score higher than those obtained from image features alone. In the BLEU scores, which consider n-gram lengths from one word to four words, each score increases by more than 1%; the CIDEr-D score increases by 2.27%; and the METEOR and ROUGE-L scores increase by 0.2% and 0.7%, respectively. The effect is significant.
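For reference, the snippet below shows how BLEU-1 through BLEU-4 can be computed for a single generated caption. It uses NLTK purely as an illustrative assumption; the scores reported in this thesis come from the MS COCO caption evaluation tools [25], which also provide METEOR, ROUGE-L, and CIDEr-D.

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

references = [["a", "man", "is", "playing", "a", "guitar"]]   # ground-truth caption(s)
candidate = ["a", "man", "plays", "the", "guitar"]            # generated caption

smooth = SmoothingFunction().method1
for n in range(1, 5):
    weights = tuple([1.0 / n] * n)   # uniform weights over 1..n-gram precisions
    score = sentence_bleu(references, candidate, weights=weights,
                          smoothing_function=smooth)
    print("BLEU-%d: %.3f" % (n, score))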
Keywords (Chinese): ★ 影片描述 (video captioning)
★ 聲音場景辨識 (acoustic scene classification)
★ 聲音事件偵測 (sound event detection)
★ 摺積神經網路 (convolutional neural network)
★ 長短期記憶人工神經網路 (long short-term memory network)
★ 詞嵌入 (word embedding)
Keywords (English): ★ Video Caption
★ Sound Event Detection
★ Acoustic Scene Classification
★ Convolutional Neural Networks
★ Long Short-Term Memory
★ Word Embedding
Table of Contents
Chinese Abstract iv
Abstract v
Acknowledgements vi
Table of Contents vii
List of Figures ix
List of Tables xii
Chapter 1 Introduction 1
1-1 Research Motivation and Background 1
1-2 Thesis Organization 2
Chapter 2 Neural Networks and Deep Learning 3
2-1 Artificial Neural Networks 4
2-1-1 History of Artificial Neural Networks 5
2-1-2 Multilayer Perceptrons 8
2-2 Deep Learning 12
2-2-1 Convolutional Neural Networks 13
2-2-2 Recurrent Neural Networks 16
2-2-3 Long Short-Term Memory Models 19
Chapter 3 Related Work on Video Captioning 21
3-1 Overview of Video Captioning 22
3-2 Visual Feature Extraction 23
3-3 Semantic Feature Extraction 27
3-4 Audio Feature Extraction 28
Chapter 4 Proposed Architecture 31
4-1 System Architecture 31
4-2 Data Preprocessing 32
4-3 Feature Extraction 33
4-3-1 Visual Feature Extraction 33
4-3-2 Audio Feature Extraction 33
4-3-3 Semantic Feature Extraction 34
4-4 Training Phase 35
4-5 Testing Phase 36
Chapter 5 Experimental Results and Discussion 37
5-1 Experimental Environment and Datasets 37
5-2 Evaluation Metrics 41
5-3 Comparison and Analysis of Experimental Results 45
Chapter 6 Conclusion and Future Work 53
References 54
References
[1] S. Venugopalan, H. Xu, J. Donahue, et al., "Translating videos to natural language using deep recurrent neural networks," arXiv preprint arXiv:1412.4729, 2014.
[2] M. Minsky and S. Papert, Perceptrons, Cambridge, MA: MIT Press.
[3] N. Srivastava, G. E. Hinton, and A. Krizhevsky, "Dropout: A Simple Way to Prevent Neural Networks from Overfitting," Journal of Machine Learning Research, vol. 15, pp. 1929-1958, Jun. 2014.
[4] J. J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities," Proc. Natl. Acad. Sci. USA, vol. 79, pp. 2554-2558, Apr. 1982.
[5] Y. Bengio, R. Ducharme, P. Vincent, and C. Janvin, "A neural probabilistic language model," Journal of Machine Learning Research, vol. 3, pp. 1137-1155, 2003.
[6] T. Mikolov, M. Karafiát, L. Burget, J. Černocký, and S. Khudanpur, "Recurrent neural network based language model," Proc. Interspeech, 2010.
[7] S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Computation, vol. 9, no. 8, pp. 1735-1780, 1997.
[8] Z. Wu, Y.-G. Jiang, X. Wang, H. Ye, and X. Xue, "Multi-stream multi-class fusion of deep networks for video classification," ACM Multimedia Conference, pp. 791-800, Oct. 2016.
[9] S. Venugopalan, M. Rohrbach, R. Mooney, T. Darrell, and K. Saenko, "Sequence to sequence video to text," ICCV, 2015.
[10] Z. Gan, C. Gan, X. He, Y. Pu, K. Tran, J. Gao, L. Carin, and L. Deng, "Semantic Compositional Networks for Visual Captioning," CVPR, 2017.
[11] Y. C. Wu, P. C. Chang, C. Y. Wang, and J. C. Wang, "Asymmetric Kernel Convolutional Neural Network for acoustic scenes classification," IEEE International Symposium on Consumer Electronics (ISCE), May 2018.
[12] ETSI Standard Doc., "Speech Processing, Transmission and Quality Aspects (STQ); Distributed Speech Recognition; Front-End Feature Extraction Algorithm; Compression Algorithms," ES 201 108, v1.1.3, Sep. 2003.
[13] ETSI Standard Doc., "Speech Processing, Transmission and Quality Aspects (STQ); Distributed Speech Recognition; Front-End Feature Extraction Algorithm; Compression Algorithms," ES 202 050, v1.1.5, Jan. 2007.
[14] Librosa: an open source Python package for music and audio analysis, https://github.com/librosa, retrieved Dec. 1, 2016.
[15] B. McFee, C. Raffel, D. Liang, D. P. W. Ellis, M. McVicar, E. Battenberg, and O. Nieto, "librosa: Audio and Music Signal Analysis in Python," in Proceedings of the 14th Python in Science Conference, Jul. 2015.
[16] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet Classification with Deep Convolutional Neural Networks," Advances in Neural Information Processing Systems, pp. 1097-1105, 2012.
[17] M. D. Zeiler and R. Fergus, "Visualizing and Understanding Convolutional Networks," CoRR, abs/1311.2901, 2013; published in Proc. ECCV, 2014.
[18] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, "Going deeper with convolutions," CoRR, abs/1409.4842, 2014.
[19] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” arXiv:1512.03385, 2015.
[20] D. Tran, L. Bourdev, R. Fergus, et al., "Learning spatiotemporal features with 3D convolutional networks," Proceedings of the IEEE International Conference on Computer Vision, pp. 4489-4497, 2015.
[21] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al., "ImageNet large scale visual recognition challenge," IJCV, 2015.
[22] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei, "Large-scale video classification with convolutional neural networks," in CVPR, 2014.
[23] S. Venugopalan, H. Xu, J. Donahue, M. Rohrbach, R. Mooney, and K. Saenko, "Translating videos to natural language using deep recurrent neural networks," in NAACL, 2015.
[24] D. Kingma and J. Ba, "Adam: A method for stochastic optimization," in ICLR, 2015.
[25] X. Chen, H. Fang, T.-Y. Lin, R. Vedantam, S. Gupta, P. Dollár, and C. L. Zitnick, "Microsoft COCO captions: Data collection and evaluation server," arXiv:1504.00325, 2015.
[26] TensorFlow: an open source Python package for machine intelligence, https://www.tensorflow.org, retrieved Dec. 1, 2016.
[27] J. Dean, et al. “Large-Scale Deep Learning for Building Intelligent Computer Systems,” in Proceedings of the Ninth ACM International Conference on Web Search and Data Mining, pp. 1-1, Feb. 2016.
[28] Theano Development Team, "Theano: A Python framework for fast computation of mathematical expressions," arXiv:1605.02688, 2016.
[29] Librosa: an open source Python package for music and audio analysis, https://github.com/librosa, retrieved Dec. 1, 2016.
[30] B. McFee, C. Raffel, D. Liang, D. P. W. Ellis, M. McVicar, E. Battenberg, and O. Nieto, "librosa: Audio and Music Signal Analysis in Python," in Proceedings of the 14th Python in Science Conference, Jul. 2015.
[31] S. Guadarrama, N. Krishnamoorthy, G. Malkarnenkar, S. Venugopalan, R. Mooney, T. Darrell, and K. Saenko, "YouTube2Text: Recognizing and describing arbitrary activities using semantic hierarchies and zero-shot recognition," in ICCV, 2013.
[32] A. Mesaros, T. Heittola, and T. Virtanen, "TUT Database for Acoustic Scene Classification and Sound Event Detection," 2016 24th European Signal Processing Conference (EUSIPCO), pp. 1128-1132, Aug. 2016.
[33] A. Mesaros, T. Heittola, and T. Virtanen, TUT Acoustic Scenes 2016, Development dataset, http://doi.org/10.5281/zenodo.45739, retrieved Dec. 1, 2016.
[34] A. Mesaros, T. Heittola, and T. Virtanen, TUT Acoustic Scenes 2016, Evaluation dataset, https://zenodo.org/record/165995#.WXblsYiGNhE, retrieved Dec. 1, 2016.
[35] Q. Kong, I. Sobieraj, W. Wang, and M. Plumbley, "Deep Neural Network Baseline for DCASE Challenge 2016," in 2016 Workshop on Detection and Classification of Acoustic Scenes and Events (DCASE2016), pp. 50-54, Sep. 2016.
[36] K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu, “Bleu: a method for automatic evalua-tion of machine translation,” ACL, 2002.
[37] M. Denkowski and A. Lavie, “Meteor universal: Language specific translation evaluation for any target language,” EACL Workshop on Statistical Machine Translation, 2014.
[38] R. Vedantam, C. L. Zitnick, and D. Parikh, "CIDEr: Consensus-based Image Description Evaluation," CVPR, 2015.
[39] A. H. Abdulnabi, G. Wang, J. Lu, and K. Jia, "Multi-task CNN Model for Attribute Prediction," IEEE Transactions on Multimedia, vol. 17, no. 11, pp. 1949-1959, Nov. 2015.
Advisor: Pao-Chi Chang (張寶基)    Date of Approval: 2018-08-01
