NCU Institutional Repository (provides theses, past exam papers, journal articles, and research projects for download): Item 987654321/84038


    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/84038


    Title: 3D Object detection, recognition, and position estimation using CNN (深度學習的3D物件偵測、辨識、與方位估計)
    Author: Chen, Shi-Xiang (陳世翔)
    Contributor: Department of Computer Science and Information Engineering
    Keywords: 3D object detection; position estimation; quaternion; object detection; 6 degrees of freedom
    Date: 2020-07-28
    Upload time: 2020-09-02 17:57:49 (UTC+8)
    Publisher: National Central University (國立中央大學)
    Abstract: In recent years, the rapid rise of deep learning has brought its application to object detection and recognition to maturity, and detection techniques have gradually expanded into 3D applications such as self-driving cars, virtual reality, augmented reality, and robotic arms. 3D detection works on 3D images, which, unlike 2D images, carry depth information; that extra depth also makes 3D object detection harder, raising problems such as extracting depth-image features effectively, handling more complex high-dimensional data, coping with clutter and occlusion between objects, and dealing with more complex scenes. In this work, we propose a convolutional neural network (CNN) that directly estimates the position, orientation, and size of 3D objects: taking RGB and depth images as input, the network extracts features, predicts each object's class, pose, and position, and finally outputs a 3D bounding box.
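    As a concrete illustration of the RGB-D input just described, the sketch below widens the first convolution of a Darknet-53-style backbone to four channels. This is a minimal sketch under an assumption: the page does not say whether depth enters as a fourth channel or through a separate stream, and the name RGBDStem, the channel count, and all shapes are illustrative, not taken from the thesis.

        # Minimal PyTorch sketch: feed RGB + depth into a conv backbone as one
        # 4-channel tensor. Hypothetical layout; the thesis may differ.
        import torch
        import torch.nn as nn

        class RGBDStem(nn.Module):
            """First convolution of a Darknet-53-style backbone, widened from
            3 to 4 input channels so a depth map can ride along with RGB."""
            def __init__(self, out_channels: int = 32):
                super().__init__()
                self.conv = nn.Conv2d(4, out_channels, kernel_size=3, padding=1, bias=False)
                self.bn = nn.BatchNorm2d(out_channels)
                self.act = nn.LeakyReLU(0.1)  # Darknet's usual activation

            def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
                x = torch.cat([rgb, depth], dim=1)   # (N, 4, H, W)
                return self.act(self.bn(self.conv(x)))

        # Usage at the 416x416 test resolution reported below:
        stem = RGBDStem()
        feats = stem(torch.randn(1, 3, 416, 416), torch.randn(1, 1, 416, 416))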
    The network used in this work is adapted from the well-known 2D detector YOLOv3, with two main improvements. First, we modify the input side of YOLOv3 to accept both RGB and depth images, and we add channel attention to the Darknet-53 backbone to strengthen feature extraction; the resulting features are used for multi-scale detection and recognition. Second, the 3D translation of each object is estimated from its center in the image and its distance to the camera, and the loss function is modified to include a quaternion term that estimates the object's 3D rotation. The network then predicts multi-class object probabilities together with 3D coordinates, orientation, and size, and outputs the 3D bounding box.
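    The final model is called 6DoF SE-YOLO (see the experiments below), which suggests the channel attention above is a squeeze-and-excitation (SE) block. The sketch below shows a standard SE block as described in the SE-Net literature; it is an assumed stand-in, not the author's code, and the reduction ratio of 16 is the conventional default rather than a value reported by the thesis.

        import torch
        import torch.nn as nn

        class SEBlock(nn.Module):
            """Squeeze-and-excitation channel attention: global-average-pool
            each channel to one number, pass it through a bottleneck MLP, and
            rescale the input feature map channel-by-channel."""
            def __init__(self, channels: int, reduction: int = 16):
                super().__init__()
                self.fc = nn.Sequential(
                    nn.Linear(channels, channels // reduction),
                    nn.ReLU(inplace=True),
                    nn.Linear(channels // reduction, channels),
                    nn.Sigmoid(),
                )

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                n, c, _, _ = x.shape
                w = x.mean(dim=(2, 3))            # squeeze: (N, C)
                w = self.fc(w).view(n, c, 1, 1)   # excite: weights in (0, 1)
                return x * w                      # recalibrate channels

        # Usage: attach after a Darknet-53 residual block's output.
        out = SEBlock(256)(torch.randn(1, 256, 52, 52))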
    In the experiments, we modified YOLOv3 into 6DoF YOLO, so that the network predicts 3D bounding boxes. On the Falling Things dataset we used 20,854 images, 90% as training samples and the rest as test samples; this detector achieves an mAP of 89.33%. After a series of changes and experimental analysis, our final 6DoF SE-YOLO architecture increases the parameter count by a factor of about 1.014 and the computation by about 1.002; tested on 416×416 images, it runs at an average of 35 frames per second and reaches an mAP of 93.59%.
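    The quaternion term added to the loss function is not spelled out on this page. A common formulation for such a rotation loss is sketched below as an assumption, not as the thesis's exact loss: it penalizes the mismatch between unit quaternions while respecting that q and -q encode the same rotation.

        import torch
        import torch.nn.functional as F

        def quaternion_loss(q_pred: torch.Tensor, q_gt: torch.Tensor) -> torch.Tensor:
            """Rotation loss between predicted and ground-truth quaternions.
            |dot| handles the double cover (q and -q are the same rotation);
            the loss is 0 for identical rotations and 1 at 180 degrees."""
            q_pred = F.normalize(q_pred, dim=-1)  # force raw outputs to unit norm
            q_gt = F.normalize(q_gt, dim=-1)
            dot = (q_pred * q_gt).sum(dim=-1).abs()
            return (1.0 - dot).mean()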
    Appears in collections: [Graduate Institute of Computer Science and Information Engineering] Master's and Doctoral Theses

    Files in this item:

    File: index.html (size: 0Kb, format: HTML, views: 120)


    All items in NCUIR are protected by the original copyright.
