Master's/Doctoral Thesis 105522125: Complete Metadata Record

DC Field  Language
dc.contributor  Department of Computer Science and Information Engineering  zh_TW
dc.creator  謝鎧楠  zh_TW
dc.creator  Kai-Nan Hsieh  en_US
dc.date.accessioned  2018-08-01T07:39:07Z
dc.date.available  2018-08-01T07:39:07Z
dc.date.issued  2018
dc.identifier.uri  http://ir.lib.ncu.edu.tw:88/thesis/view_etd.asp?URN=105522125
dc.contributor.department  Department of Computer Science and Information Engineering  zh_TW
dc.description  National Central University  zh_TW
dc.description  National Central University  en_US
dc.description.abstract  After the automobile became a means of transportation that people depend on, many vehicle-related accidents followed. Collisions while reversing, caused by the driver not noticing the situation behind the vehicle, are among the most frequent. To reduce such accidents, computer-vision detection and recognition techniques are used to understand the scene behind the vehicle and remind the driver to pay attention to rear safety. Recent developments in convolutional neural networks (CNN) have made computer-vision detection and recognition more accurate and more stable than before. We use deep learning to train a vision system that finds objects which could be dangerous while reversing, and we use depth information to estimate the distance between each object and the vehicle, so that a possible collision can be detected and the driver warned; a 3D camera provides the depth information that helps decide whether a solid object that could cause a reversing accident is present in the image. Because the color camera module and the depth camera module of the Kinect 3D camera differ in position and field of view (FOV), we first align the captured color and depth images with the Kinect SDK, so that the bounding boxes drawn when preparing the training data do not differ too much between the two images and introduce training error. After the training data are prepared, we modify the input stage of Faster R-CNN (Faster Regions with Convolutional Neural Networks) so that the network accepts four-channel (RGB-D) input composed of a color image and a depth image. Our experiments compare obstacle detection with different inputs (color only, depth only, and four-channel RGB-D) and with two different convolutional architectures for extracting features from the color and depth images; after an obstacle is found, its distance from the vehicle is computed from the depth image. The final results show that the best way to extract features from the four-channel input is to extract feature maps from the color image and the depth image with separate convolutional layers, concatenate the two feature maps, and feed the concatenated result into the fully connected layers for the final detection and recognition.  zh_TW
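
The feature-fusion scheme described in the last sentence of the abstract can be sketched roughly as follows. This is an illustrative PyTorch sketch, not the thesis code: the small backbones, layer sizes, and the class name TwoStreamExtractor are assumptions; the thesis builds the two streams into a Faster R-CNN detector.

import torch
import torch.nn as nn

class TwoStreamExtractor(nn.Module):
    """Illustrative two-stream backbone: separate convolutional layers extract
    feature maps from the color image and the depth image, and the two maps
    are concatenated along the channel axis before the detection head."""

    def __init__(self):
        super().__init__()
        # Hypothetical small backbones; the thesis uses Faster R-CNN backbones.
        self.rgb_stream = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.depth_stream = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )

    def forward(self, rgb, depth):
        f_rgb = self.rgb_stream(rgb)        # (N, 64, H/2, W/2)
        f_depth = self.depth_stream(depth)  # (N, 64, H/2, W/2)
        # Channel-wise concatenation of the color and depth feature maps.
        return torch.cat([f_rgb, f_depth], dim=1)  # (N, 128, H/2, W/2)

# Example: a four-channel RGB-D tensor split into the two streams.
rgbd = torch.randn(1, 4, 480, 640)
features = TwoStreamExtractor()(rgbd[:, :3], rgbd[:, 3:])
print(features.shape)  # torch.Size([1, 128, 240, 320])
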
dc.description.abstract  Car accidents happen frequently now that the automobile has become the most popular means of transportation in daily life, and they cost lives and property when drivers are negligent. Therefore, many automobile manufacturers have invested in and developed driving assistant systems to improve driving safety. Computer vision (CV) has been adopted because of its ability to detect and recognize objects. In recent years, convolutional neural networks (CNN) have developed dramatically, which makes computer vision much more reliable. We train our rear obstacle detection and recognition system with a deep learning model, using the color images and depth images received from a Microsoft Kinect v2. Because the fields of view (FOV) of the Kinect v2 color and depth cameras are different, we calibrate the color image and depth image with the Kinect SDK to reduce the disparity in pixel positions. Our detection and recognition system is based on Faster R-CNN. The input consists of the two images, and we experiment with two different convolutional network architectures for extracting feature maps from the input: one with a single feature extractor and a single classifier, and the other with two feature extractors and a single classifier. The two-extractor architecture gives the best detection result. We also run experiments with only the color image or only the depth image as input and compare them with the previous two methods. Finally, after detecting an obstacle, we use the depth image to estimate the distance between the vehicle and the obstacle.  en_US
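
The final step, estimating the obstacle distance from the depth image, can be illustrated with a minimal sketch. It assumes a Kinect v2-style depth frame (512x424, per-pixel distances in millimetres) and a detector bounding box, and takes the median depth inside the box as a simple robust estimate; the function obstacle_distance_mm and this particular statistic are illustrative assumptions, not necessarily the thesis's exact procedure.

import numpy as np

def obstacle_distance_mm(depth_image: np.ndarray, box: tuple) -> float:
    """Estimate the distance to a detected obstacle from a depth image.

    depth_image: HxW array of per-pixel depths in millimetres (0 = no reading).
    box: (x1, y1, x2, y2) bounding box returned by the detector.
    Returns the median valid depth inside the box.
    """
    x1, y1, x2, y2 = box
    roi = depth_image[y1:y2, x1:x2]
    valid = roi[roi > 0]            # drop pixels with no depth reading
    if valid.size == 0:
        return float("nan")         # no usable depth inside the box
    return float(np.median(valid))

# Example with synthetic data: an obstacle about 1.5 m behind the vehicle.
depth = np.full((424, 512), 4000, dtype=np.uint16)   # background at 4 m
depth[150:300, 200:350] = 1500                       # obstacle region
print(obstacle_distance_mm(depth, (200, 150, 350, 300)))  # ~1500.0
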
dc.subject  Convolutional Neural Network  zh_TW
dc.subject  Depth and Color Images  zh_TW
dc.subject  Obstacle Detection  zh_TW
dc.subject  Faster R-CNN  en_US
dc.subject  Rear Obstacle Detection  en_US
dc.title  Rear obstacle detection using a convolutional neural network with depth and color images  zh_TW
dc.language.iso  zh-TW  zh-TW
dc.title  Rear obstacle detection using a deep convolutional neural network with RGB-D images  en_US
dc.type  Master's/Doctoral Thesis  zh_TW
dc.type  thesis  en_US
dc.publisher  National Central University  en_US
