NCU Institutional Repository (中大機構典藏): Item 987654321/77418


    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/77418


    Title: 結合影像與音訊深度學習之影片描述技術 (Video Caption Technique Based on Joint Image-Audio Deep Learning)
    Author: Liaw, Pei-Sin (廖沛欣)
    Contributors: Department of Communication Engineering
    Keywords: Video Caption; Sound Event Detection; Acoustic Scene Classification; Convolutional Neural Networks; Long Short-Term Memory; Word Embedding
    Date: 2018-08-01
    Upload time: 2018-08-31 14:37:35 (UTC+8)
    Publisher: National Central University
    Abstract: Since the 20th century, advances in technology have driven the hope that computers could learn in the same way humans do, giving rise to artificial intelligence (AI). As large numbers of researchers joined AI research, techniques such as machine learning and, later, deep learning were developed to let computers learn how to judge and classify data. In recent years, neural networks have become the most widely used approach for learning from large amounts of data, and many network architectures have been developed for different forms of data.
    This thesis uses several neural networks to extract features from a video's images, its audio, and the sentences that describe it. Image and audio features are extracted with convolutional neural networks (CNNs); the text is first encoded into a numeric representation and then mapped through word embeddings so that related words keep a continuous relationship with one another. Finally, the concatenated image and audio features are fed into the LSTM-based semantic compositional network to initialize its first step, giving the decoder an overview of the video content, while the semantic features serve as an enhancement matrix; through training, the network learns to describe videos in natural language. A sketch of this fusion idea follows below.
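The snippet below is a minimal PyTorch sketch of the fusion described in the paragraph above: pre-extracted image and audio CNN features are concatenated, projected, and used to initialize an LSTM caption decoder that runs over word embeddings. It is an illustration only, not the thesis code; the class name, feature dimensions, and single-layer decoder are assumptions made for the example, and the semantic enhancement matrix is omitted.

```python
# Minimal sketch (assumed dimensions, not the thesis implementation) of
# initializing an LSTM caption decoder from concatenated image+audio features.
import torch
import torch.nn as nn

class AVCaptionDecoder(nn.Module):
    def __init__(self, vocab_size, img_dim=2048, aud_dim=512,
                 embed_dim=300, hidden_dim=512):
        super().__init__()
        # project concatenated image+audio feature to the LSTM state size
        self.init_proj = nn.Linear(img_dim + aud_dim, hidden_dim)
        self.embed = nn.Embedding(vocab_size, embed_dim)   # word embeddings
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)        # next-word logits

    def forward(self, img_feat, aud_feat, captions):
        # img_feat: (B, img_dim), aud_feat: (B, aud_dim), captions: (B, T) ids
        fused = torch.cat([img_feat, aud_feat], dim=1)
        h0 = torch.tanh(self.init_proj(fused)).unsqueeze(0)  # (1, B, hidden)
        c0 = torch.zeros_like(h0)
        emb = self.embed(captions)                           # (B, T, embed)
        hidden, _ = self.lstm(emb, (h0, c0))
        return self.out(hidden)                              # (B, T, vocab)

# Shape check with random tensors:
model = AVCaptionDecoder(vocab_size=10000)
logits = model(torch.randn(4, 2048), torch.randn(4, 512),
               torch.randint(0, 10000, (4, 12)))
print(logits.shape)  # torch.Size([4, 12, 10000])
```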
    Scoring the sentences produced by the semantic compositional network with automatic language evaluation metrics, we find that adding audio features helps the whole network. Across all metrics, when sound event and acoustic scene features are added to the image features, the generated video descriptions score higher than those of a network trained on image features alone. For BLEU, considering n-gram lengths from one word to four words, every score improves by more than 1%; CIDEr-D improves by as much as 2.27%, and METEOR and ROUGE-L improve by 0.2% and 0.7% respectively, which is a clear and consistent gain.
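As a small illustration of the scoring mentioned above, the snippet below computes BLEU-1 through BLEU-4 with NLTK for an invented caption against two invented references. This is not the evaluation pipeline used in the thesis (which also reports CIDEr-D, METEOR, and ROUGE-L); it only shows how the n-gram lengths from one to four words enter the BLEU weights.

```python
# Toy BLEU-1..BLEU-4 computation with NLTK; sentences are made up for the demo.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

references = [["a", "man", "is", "playing", "a", "guitar"],
              ["someone", "plays", "the", "guitar"]]
candidate = ["a", "man", "plays", "the", "guitar"]

smooth = SmoothingFunction().method1
for n in range(1, 5):
    weights = tuple([1.0 / n] * n)  # uniform weights over 1..n-gram precisions
    score = sentence_bleu(references, candidate, weights=weights,
                          smoothing_function=smooth)
    print(f"BLEU-{n}: {score:.3f}")
```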
    Appears in Collections: [Graduate Institute of Communication Engineering] Theses & Dissertations

    Files in This Item:

    File          Description    Size    Format    Views
    index.html                   0Kb     HTML      243


    All items in NCUIR are protected by copyright, with all rights reserved.

