NCU Institutional Repository


    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/81150


    Title: 輕量化卷積神經網路的車門開啟防撞警示;Collision warning for car door opening with a light convolutional neural network
    Authors: 李佩瑩;Lee, Pei-Ying
    Contributors: Department of Computer Science and Information Engineering
    Keywords: door-open collision warning system;convolutional neural network;DOW;CNN
    Date: 2019-07-24
    Issue Date: 2019-09-03 15:37:13 (UTC+8)
    Publisher: National Central University
    Abstract: The number of cars and motorcycles in Taiwan climbs year after year, and with high population density, narrow roads, and a shortage of parking spaces, pedestrians and vehicles competing for road space and double-parked cars are everyday sights. As a result, collisions in which someone opens the door of a car stopped at the roadside without noticing traffic approaching from behind keep causing injuries and deaths, so preventing accidents caused by careless door opening has become an important research topic. In this thesis, we propose a door-open collision warning system based on a lightweight convolutional neural network (CNN); using a camera as the sensor, it monitors pedestrians, bicycles, motorcycles, and cars approaching from behind and warns the driver before a possible collision, protecting the driver, the passengers, and other road users.
      This thesis consists of two parts. The first part is the lightweight convolutional neural network: MobileNet V2 with width multiplier 1.6 replaces the original Darknet-53 in YOLOv3 as the feature extractor, reducing the computation and parameter storage needed at run time, and the FPN-like (feature pyramid network) structure of YOLOv3 then detects and recognizes rear moving objects on feature maps of three different scales. The second part takes the object coordinates and classes output by the first part, applies a top-view transformation that maps the original image onto a virtual image plane parallel to the ground, and from that plane computes the longitudinal and lateral distances and the time to collision (TTC) on which the warning is based.
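    The following is a minimal PyTorch sketch of the idea behind the first part, not the thesis implementation: MobileNet V2 with width multiplier 1.6 stands in for Darknet-53 as the feature extractor, and three 1×1 prediction heads operate on feature maps at strides 8, 16, and 32, in the spirit of YOLOv3's three-scale detection. The backbone split points, head layout, and input size are illustrative assumptions, and YOLOv3's FPN-style upsampling neck is omitted for brevity.

    import torch
    import torch.nn as nn
    from torchvision.models import MobileNetV2

    class LightYoloSketch(nn.Module):
        """MobileNet V2 (width 1.6) backbone with three YOLO-style prediction heads."""
        def __init__(self, num_classes=4, anchors_per_scale=3):
            # num_classes=4: pedestrian, bicycle, motorcycle, car (as in the abstract).
            super().__init__()
            features = MobileNetV2(width_mult=1.6).features
            # Assumed split points giving feature maps at strides 8, 16, and 32.
            self.stage1 = features[:7]
            self.stage2 = features[7:14]
            self.stage3 = features[14:]
            out_ch = anchors_per_scale * (5 + num_classes)  # 4 box coords + objectness + classes
            # LazyConv2d infers its input channel count on the first forward pass.
            self.head1 = nn.LazyConv2d(out_ch, kernel_size=1)
            self.head2 = nn.LazyConv2d(out_ch, kernel_size=1)
            self.head3 = nn.LazyConv2d(out_ch, kernel_size=1)

        def forward(self, x):
            f1 = self.stage1(x)   # stride 8: finest map, smallest objects
            f2 = self.stage2(f1)  # stride 16
            f3 = self.stage3(f2)  # stride 32: coarsest map, largest objects
            return self.head1(f1), self.head2(f2), self.head3(f3)

    model = LightYoloSketch()
    preds = model(torch.randn(1, 3, 416, 416))
    print([p.shape for p in preds])  # three prediction maps at different scales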
      In the experiments, the YOLOv3-MobileNet V2 width 1.6 architecture uses about 2.45 times fewer parameters and 3.24 times less computation than YOLOv3. Tested on 960×540 videos, the average execution speed is 28 frames per second, and the object detection system reaches an mAP of 88.43%.;In most cities, traffic is crowded and chaotic. Drivers sometimes cannot pull their cars clear of the moving traffic stream, or inconsiderate drivers simply stop in the roadway to get out. In these situations, an abruptly opened car door may be struck by following cars or motorbikes.
    To avoid such collisions, we propose a “car-door-open warning system” that combines a light convolutional neural network with a location estimator. First, a light convolutional neural network is constructed to detect and recognize the moving objects in the images, including approaching cars, motorbikes, pedestrians, and other moving objects. A modified MobileNet V2 replaces the original Darknet-53 in YOLOv3 to shrink the amount of computation and the number of network parameters. Then, the locations of the detected objects are transformed from the coordinate system of the captured images into that of the top-view images to estimate the relative positions of the approaching objects.
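    Below is a minimal NumPy sketch, under assumed calibration values, of the warning logic described for the second part: the bottom-center of a detected bounding box is projected onto the ground plane through a flat-ground homography, and the resulting longitudinal and lateral distances together with the time to collision (TTC) decide whether to warn. The camera height, focal length, horizon row, and thresholds below are placeholders, not values from the thesis.

    import numpy as np

    # Simplified flat-ground pinhole model with assumed (uncalibrated) values.
    cam_height = 1.0               # camera height above the ground, metres
    focal_px = 700.0               # focal length in pixels
    cx, horizon_v = 480.0, 250.0   # principal-point column and horizon row

    # Homography mapping an image point (u, v, 1) to ground coordinates
    # (lateral, longitudinal) after dividing by the third component.
    H = np.array([[cam_height, 0.0, -cam_height * cx],
                  [0.0,        0.0,  cam_height * focal_px],
                  [0.0,        1.0, -horizon_v]])

    def to_ground(u, v):
        """Project the bottom-center of a bounding box onto the ground plane."""
        x, y, w = H @ np.array([u, v, 1.0])
        return x / w, y / w        # (lateral, longitudinal) distances in metres

    def should_warn(pt_prev, pt_now, dt, lane_half_width=1.0, ttc_threshold=2.0):
        """Warn when an object is roughly in line with the door and its TTC is small."""
        _, lon_prev = to_ground(*pt_prev)
        lat_now, lon_now = to_ground(*pt_now)
        closing_speed = (lon_prev - lon_now) / dt   # m/s toward the parked car
        if closing_speed <= 0:                      # not approaching
            return False
        ttc = lon_now / closing_speed               # time to collision, seconds
        return abs(lat_now) < lane_half_width and ttc < ttc_threshold

    # Example: bottom-center of a detected motorbike observed 0.5 s apart.
    print(should_warn(pt_prev=(480, 300), pt_now=(482, 330), dt=0.5))   # True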
    To evaluate the performance of the proposed system, several experiments and comparisons were conducted and reported. On a test set of 3164 images, the mAP of the object detection system reaches 88.43%, and the average execution speed on 960×540 images is 28 frames per second. Compared with the original YOLOv3, the parameter count and the amount of computation are reduced by 2.45 times and 3.24 times, respectively.
    Appears in Collections:[Graduate Institute of Computer Science and Information Engineering] Electronic Thesis & Dissertation

    Files in This Item:

    File: index.html | Size: 0Kb | Format: HTML


    All items in NCUIR are protected by copyright, with all rights reserved.
