NCU Institutional Repository - provides theses and dissertations, past exam papers, journal articles, and research projects for download: Item 987654321/74740

    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/74740


    Title: Semantic Segmentation Based on an Object-Mask and Boundary-Guided Multi-Scale Recurrent Convolutional Neural Network
    Author: Wang, Kuan-Chung (王冠中)
    Contributor: Department of Computer Science and Information Engineering
    Keywords: Convolutional Neural Network; Semantic Segmentation
    Date: 2017-08-18
    Upload time: 2017-10-27 14:37:58 (UTC+8)
    Publisher: National Central University
    Abstract: In recent years, deep learning, a branch of machine learning, has come to play an important role in artificial intelligence. In particular, the Convolutional Neural Network (CNN) has delivered breakthrough performance in image recognition compared with traditional classification methods. The emergence of the Fully Convolutional Network (FCN)[10] has likewise driven rapid progress in image semantic segmentation: unlike earlier approaches that cluster pixels by texture, color, and similar image content, it incorporates semantic information into training and thereby improves segmentation accuracy. This thesis combines the strengths of two networks, one based on object-boundary guidance that reinforces the integrity of edges and of the objects themselves, and one responsible for predicting the semantic segmentation of the image, and proposes an end-to-end trainable network architecture.
    The proposed architecture improves on DT-EdgeNet (Domain Transform with EdgeNet)[11]. Here we replace the edge network of [11] with the OBG-FCN[12] mask network, which predicts reference maps for the background, the objects, and the object boundaries. In addition, the architecture uses a multi-scale ResNet-101 as its base network and incorporates multi-scale atrous convolutions, combined in parallel branches, into training. This preserves the resolution of the feature maps, enlarges the receptive field, and further improves semantic segmentation accuracy.
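To make the parallel multi-scale atrous design concrete, the following is a minimal PyTorch-style sketch, assuming several 3x3 dilated branches applied to a shared ResNet-101 feature map, concatenated, and projected to per-pixel class scores. The module name, channel sizes, and dilation rates are illustrative assumptions rather than the exact configuration used in the thesis.

import torch
import torch.nn as nn

class MultiScaleAtrous(nn.Module):
    # Parallel atrous (dilated) 3x3 convolutions over one feature map.
    # padding == dilation keeps each branch's output the same spatial size
    # as the input, so feature-map resolution is preserved while the
    # receptive field grows with the dilation rate.
    def __init__(self, in_ch=2048, branch_ch=256, rates=(1, 6, 12, 18), num_classes=21):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, branch_ch, kernel_size=3,
                          padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(branch_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        # Fuse the concatenated branches and predict per-pixel class scores.
        self.project = nn.Conv2d(branch_ch * len(rates), num_classes, kernel_size=1)

    def forward(self, feats):
        # feats: backbone output, e.g. (N, 2048, H/8, W/8) from ResNet-101.
        out = torch.cat([branch(feats) for branch in self.branches], dim=1)
        return self.project(out)

# Usage sketch: MultiScaleAtrous()(torch.randn(1, 2048, 64, 64)) returns a
# (1, 21, 64, 64) score map; upsample it to the input resolution for the
# final per-pixel prediction.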
    In our experiments, the proposed architecture achieves high recognition accuracy on the VOC2012 test set. As an extended application, we further combine object bounding boxes extracted by Faster R-CNN with the segmentation results of the proposed architecture to perform instance-level segmentation.
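As a rough illustration of this instance-level extension (a sketch under assumptions, not the exact procedure of the thesis), the snippet below intersects the semantic label map with detector output: pixels inside a Faster R-CNN box whose semantic label matches the box's class form that instance's mask. The helper name and the (x1, y1, x2, y2, class_id) box format are hypothetical.

import numpy as np

def boxes_to_instance_masks(semantic_map, boxes):
    # semantic_map: (H, W) integer array of per-pixel class ids from the
    #               semantic segmentation network.
    # boxes:        iterable of (x1, y1, x2, y2, class_id) tuples from a
    #               detector such as Faster R-CNN (format assumed here).
    # Returns one boolean (H, W) mask per box: pixels inside the box whose
    # semantic label equals the detected class.
    masks = []
    for x1, y1, x2, y2, cls in boxes:
        mask = np.zeros(semantic_map.shape, dtype=bool)
        mask[y1:y2, x1:x2] = (semantic_map[y1:y2, x1:x2] == cls)
        masks.append(mask)
    return masks

# Example: two detected boxes of the same class split one connected
# semantic blob into two separate instance masks.
# instance_masks = boxes_to_instance_masks(seg, [(10, 10, 60, 120, 15),
#                                                (70, 10, 130, 120, 15)])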
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Theses and Dissertations

    Files in This Item:

    File         Description    Size    Format    Views
    index.html                  0 Kb    HTML      323


    All items in NCUIR are protected by copyright.

