NCU Institutional Repository — theses, past exams, journal articles, and research projects: Item 987654321/83929


Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/83929


Title: Automatic Door Detection based on Graph Convolution Network
Author: Loakhajorn, Chanwit (羅昌威)
Contributor: Department of Computer Science and Information Engineering
Keywords: Deep Learning; Computer Vision; Graph Convolution Network; Door Detection
Date: 2020-07-14
Uploaded: 2020-09-02 17:42:49 (UTC+8)
Publisher: National Central University
Abstract:
    The purpose of this research is to navigate robots to doors and to help blind people find
    entrances and exits, even from across a room. Modern object detection models are very
    powerful, with high accuracy and high frame rates, but the main problem is that they struggle
    to distinguish the glass doors of convenience stores from glass walls. To solve this problem,
    we use a Graph Convolutional Network (GCN) to improve accuracy. The idea is to use the
    GCN model to identify the entrance from its surrounding objects. The system consists of two
    parts: an object detector and an association module. For the object detector, we take advantage
    of existing public models with high accuracy and frame rate. YOLOv4, released this year, is the
    state of the art compared with previous models, but on its own it still produces many false
    detections, so our proposed method is needed to correct them. The association module is our
    proposed model: a fully connected (FC) layer combined with a GCN. Before association, the
    object detector's output must first be converted into a graph structure. We train on a GTX 1080
    and test the real-time models on an AGX board. Our dataset is custom, collected from Google
    Street View and from videos recorded in Taiwan. It consists of more than 100 convenience
    stores, including indoor and outdoor environments. Using the GCN, we reduce the object
    detector's false detections and achieve 86% accuracy on our test set. Tested on video and in
    real environments, our model runs at around 5 FPS, demonstrating that it can find automatic
    doors. To confirm that our model solves the problem, we present results in the experiment
    section and show how the model works.
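    The pipeline described above — detector output converted into a graph, then refined by a
    GCN — can be sketched as follows. This is an illustrative reconstruction, not the thesis's
    actual code: the node feature layout, the pixel-distance threshold for edges, the class labels,
    and the single propagation layer are all assumptions made for the example.

    ```python
    import numpy as np

    def build_graph(detections, max_dist=200.0):
        """Convert object-detector output into a graph: nodes are detections,
        edges connect detections whose box centers lie within max_dist pixels.
        Each detection is a tuple (class_id, x_center, y_center, w, h)."""
        n = len(detections)
        feats = np.array(detections, dtype=float)          # (n, 5) node features
        adj = np.eye(n)                                    # self-loops on every node
        for i in range(n):
            for j in range(i + 1, n):
                dist = np.hypot(feats[i, 1] - feats[j, 1],
                                feats[i, 2] - feats[j, 2])
                if dist <= max_dist:
                    adj[i, j] = adj[j, i] = 1.0
        return feats, adj

    def gcn_layer(feats, adj, weight):
        """One GCN propagation step: H' = ReLU(D^-1/2 A D^-1/2 H W)."""
        deg = adj.sum(axis=1)                              # degrees (>= 1, self-loops)
        d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
        norm_adj = d_inv_sqrt @ adj @ d_inv_sqrt           # symmetric normalization
        return np.maximum(norm_adj @ feats @ weight, 0.0)  # ReLU

    # Hypothetical detections: a door candidate near a sign, with a distant shelf.
    dets = [(0, 100, 200, 80, 160),   # class 0 = door (assumed label)
            (1, 150, 120, 40, 20),    # class 1 = sign
            (2, 400, 210, 60, 120)]   # class 2 = shelf
    feats, adj = build_graph(dets)
    rng = np.random.default_rng(0)
    hidden = gcn_layer(feats, adj, rng.standard_normal((5, 8)))
    print(hidden.shape)  # (3, 8): one 8-dim embedding per detected object
    ```

    In the thesis's full system these embeddings would feed the FC layer that decides
    whether the door candidate is a real automatic door; here the sketch stops at the
    graph construction and one propagation step.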
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Doctoral and Master's Theses

    Files in This Item:

    File        Description  Size  Format  Views
    index.html               0Kb   HTML    103   View/Open


    All items in NCUIR are protected by the original copyright.
