

    Please use this persistent URL to cite or link this item: http://ir.lib.ncu.edu.tw/handle/987654321/77632


    Title: MFNet: 3D Vehicle Detection Based on Multilevel Fusion Network of Point Cloud and RGB Images
    Author: Wu, Cheng-Haw
    Contributors: Department of Computer Science and Information Engineering
    Keywords: autonomous driving; 3D object detection; LiDAR; deep learning; ADAS
    Date: 2018-07-26
    Date Uploaded: 2018-08-31 14:50:50 (UTC+8)
    Publisher: National Central University
    Abstract: In an age when transport is increasingly common, automated driving is expected to ease traffic congestion and improve safety, and it has become a key technology under intense research, for example in advanced driver assistance systems (ADAS). The core software functions of automated driving fall roughly into three categories: perception, planning, and control. Perception refers to the ability of the automated driving system to collect various kinds of information from the environment and to extract relevant knowledge from it. This thesis focuses on the detection and localization of vehicles within environmental perception.
    In traditional computer vision, most object detection problems have been studied in two dimensions. In recent years, as the limitations of two-dimensional data have become better understood and the cost of three-dimensional sensors such as stereo cameras and LiDAR has fallen, 3D object detection has begun to receive attention. 3D object detection recovers an object's distance and 3D coordinates, and the sensor data help overcome problems of lighting, viewing angle, and color variation in image recognition. The goal of this thesis is a 3D object (vehicle) detection model based on LiDAR data and RGB images.
    For high-precision 3D vehicle detection in automated driving scenarios, we propose the Multilevel Fusion Network (MFNet), a deep learning model that reuses and fuses cross-layer features of a neural network. It takes LiDAR point clouds and RGB images as input and extracts high-resolution feature maps through an encoder-decoder network. These features feed both an initial fusion network built on a Region Proposal Network (RPN) and the high-level fusion network in the second half of the model, which finally predicts class probabilities (vehicles and pedestrians) and 3D bounding boxes.
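    The two inputs described above can be illustrated with a minimal sketch: discretizing a LiDAR point cloud into a bird's-eye-view grid, and concatenating per-proposal LiDAR and image features as a simple form of fusion. The function names, grid ranges, and feature shapes below are illustrative assumptions, not the thesis's actual implementation.

    ```python
    import numpy as np

    def bev_occupancy(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0), cell=0.5):
        """Discretize LiDAR points of shape (N, 3) into a bird's-eye-view occupancy grid."""
        H = int((y_range[1] - y_range[0]) / cell)
        W = int((x_range[1] - x_range[0]) / cell)
        grid = np.zeros((H, W), dtype=np.float32)
        xs = ((points[:, 0] - x_range[0]) / cell).astype(int)
        ys = ((points[:, 1] - y_range[0]) / cell).astype(int)
        # Keep only points that fall inside the grid.
        valid = (xs >= 0) & (xs < W) & (ys >= 0) & (ys < H)
        grid[ys[valid], xs[valid]] = 1.0
        return grid

    def fuse_roi_features(bev_feat, img_feat):
        """Fuse per-proposal BEV and image feature vectors by concatenation."""
        return np.concatenate([bev_feat, img_feat], axis=-1)
    ```

    A real model would stack intensity and height channels rather than a single occupancy bit, and fuse learned feature maps rather than raw grids, but the data flow is the same.
    
    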
    Experimental results on the well-known autonomous driving benchmark KITTI show that our method performs well in both 3D object detection and bird's-eye-view evaluation, with a particularly strong mean average precision (mAP) at the Hard difficulty level, which contains highly occluded objects. MFNet runs at about 11 FPS, close to real time and faster than recent 3D vehicle detection models.
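    The AP values reported above follow the interpolated average-precision style of evaluation; as a rough sketch of how such a score is computed (the original KITTI protocol used 11 equally spaced recall points), the following function is illustrative and not the benchmark's official evaluation code:

    ```python
    import numpy as np

    def average_precision(recalls, precisions):
        """11-point interpolated AP: average the best precision at or beyond
        each of 11 equally spaced recall thresholds."""
        ap = 0.0
        for r in np.linspace(0.0, 1.0, 11):
            mask = recalls >= r
            p = precisions[mask].max() if mask.any() else 0.0
            ap += p / 11.0
        return ap
    ```

    mAP is then the mean of these per-class AP values across the evaluated categories.
    
    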
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Master and Doctoral Theses

    Files in This Item:

    File        Description  Size  Format  Views
    index.html               0Kb   HTML    123    View/Open


    All items in NCUIR are protected by copyright, with all rights reserved.

