Master's/Doctoral Thesis 108521110: Complete Metadata Record

DC Field | Value | Language
dc.contributor | 電機工程學系 (Department of Electrical Engineering) | zh_TW
dc.creator | 謝易軒 | zh_TW
dc.creator | I-Hsuan Hsieh | en_US
dc.date.accessioned | 2021-07-06T07:39:07Z |
dc.date.available | 2021-07-06T07:39:07Z |
dc.date.issued | 2021 |
dc.identifier.uri | http://ir.lib.ncu.edu.tw:88/thesis/view_etd.asp?URN=108521110 |
dc.contributor.department | 電機工程學系 (Department of Electrical Engineering) | zh_TW
dc.description | 國立中央大學 (National Central University) | zh_TW
dc.description | National Central University | en_US
dc.description.abstract | [Translated from Chinese] This thesis designs a wearable guidance device for the visually impaired and establishes a series of algorithms for outdoor travel, ensuring that users walk safely on sidewalks and crosswalks and can cross roads safely; in addition, when the destination is set to one of the four major convenience store chains, the system recognizes the store signboard and guides the user to the store entrance. The hardware combines an Nvidia Jetson AGX Xavier embedded system board with a Stereolabs ZED 2 stereo camera: the ZED 2 is mounted on the brim of a cap worn by the user, while the embedded board and other hardware are fixed to a board carried on the user's back; a 5200 mAh lithium battery supplies power for about 2.5 hours of continuous use. The software uses a pre-trained Fast-SCNN semantic segmentation model. Combining the segmentation of the image in front of the user with the depth map output by the ZED 2, a walkable-area detection algorithm divides the frame into seven directions, computes a walkability score for each, selects one walkable direction, and issues voice prompts guiding the user toward the more open and safer part of the sidewalk or crosswalk, implementing obstacle avoidance at the same time. Before the user crosses a road, the API provided by the Invignal platform is used to read the traffic-light state; voice prompts announce the light status and the remaining seconds so that the user can cross the crosswalk safely. When the destination is one of the four major convenience store chains, a pre-trained YOLOv5 model recognizes the store signboard; combining the detection result with the ZED 2 magnetometer data, the signboard's position in the image is converted to a compass bearing in physical space, the heading angle and distance between the user and the target are computed, and audio cues of different frequencies prompt the user as they approach the store. Results show that the proposed obstacle-avoidance guidance lets users walk correctly and safely on sidewalks and crosswalks and that, excluding the latency of voice prompts, the algorithm processes at least 6 frames per second. Experiments with a visually impaired participant confirm that the proposed method correctly guides walking on sidewalks and crosswalks, successfully avoids obstacles, that the traffic-light status prompts improve safety when crossing intersections, and that the participant safely reached the set destination. | zh_TW
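The seven-direction walkable-area selection described in the abstract could be sketched roughly as follows. This is an illustrative assumption, not the thesis implementation: the binary "walkable" mask, the depth threshold, and the scoring rule (walkable-pixel ratio penalized by nearby non-walkable pixels) are all hypothetical names and choices.

```python
import numpy as np

def choose_direction(walkable_mask: np.ndarray,
                     depth_m: np.ndarray,
                     n_sectors: int = 7,
                     min_clearance_m: float = 1.0) -> int:
    """Return the index (0 = far left) of the most walkable of n_sectors
    vertical image sectors.

    walkable_mask: H x W array, 1 where segmentation says sidewalk/crosswalk.
    depth_m:       H x W depth map in meters (e.g. from the ZED 2).
    """
    h, w = walkable_mask.shape
    bounds = np.linspace(0, w, n_sectors + 1, dtype=int)
    scores = []
    for i in range(n_sectors):
        sector_mask = walkable_mask[:, bounds[i]:bounds[i + 1]]
        sector_depth = depth_m[:, bounds[i]:bounds[i + 1]]
        # Fraction of pixels labelled walkable in this sector.
        walkable_ratio = sector_mask.mean()
        # Penalize sectors containing close non-walkable pixels (obstacles).
        obstacle_ratio = np.mean((sector_depth < min_clearance_m)
                                 & ~sector_mask.astype(bool))
        scores.append(walkable_ratio - obstacle_ratio)
    return int(np.argmax(scores))
```

In a system like the one described, the chosen index would then be mapped to one of the seven voice prompts (e.g. "slightly left", "straight ahead").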
dc.description.abstract | This study designs a wearable device for the visually impaired and establishes a series of algorithms for outdoor walking. The main purpose is to guide the visually impaired to walk correctly on sidewalks and crosswalks and to ensure that they can pass through intersections safely. The hardware uses a Jetson AGX Xavier embedded system board combined with a Stereolabs ZED 2 RGB-D camera. The ZED 2 is mounted on the brim of a cap, and the embedded system board and other hardware accessories are carried on the user's back. A 5200 mAh lithium battery serves as the power supply and provides about 2.5 hours of running time. The software uses a pre-trained Fast-SCNN semantic segmentation model. By combining the segmentation result with the depth map obtained from the ZED 2, a walkable-area detection algorithm is proposed that computes a walkability confidence for the image in front of the visually impaired person. Based on these confidence values, voice guidance indicates a suitable forward direction to the user. In addition, when the user is about to cross at a crosswalk, the API provided by the Invignal platform is used to obtain the traffic-light status and the remaining seconds of the red light so that the user can cross safely. Finally, a pre-trained YOLOv5 model is applied to recognize the signboards of convenience stores. Based on the magnetometer measurements from the ZED 2 and the signboard's position in the image, the user's position relative to the convenience store is computed, and the visually impaired person is guided to the store entrance. An experiment with a visually impaired participant showed that the proposed algorithms and the wearable device effectively guide the user to walk correctly on sidewalks and to pass through crosswalks safely. | en_US
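The signboard-guidance step, converting a YOLOv5 detection plus the ZED 2 magnetometer heading into a compass bearing and a frequency-coded audio cue, could be sketched as below. The pinhole-style angular offset, the horizontal field of view, and the beep-interval mapping are assumptions for illustration; the thesis does not specify these formulas.

```python
import math

def signboard_bearing(heading_deg: float, cx_px: float,
                      image_width_px: int = 1280,
                      hfov_deg: float = 110.0) -> float:
    """Absolute compass bearing (degrees, 0 = north) of a detected signboard.

    heading_deg: camera yaw from the magnetometer.
    cx_px:       horizontal center of the YOLO bounding box in pixels.
    hfov_deg:    assumed horizontal field of view of the camera.
    """
    # Offset of the box center from the optical axis, in degrees.
    offset = (cx_px / image_width_px - 0.5) * hfov_deg
    return (heading_deg + offset) % 360.0

def beep_interval_s(angle_error_deg: float,
                    fast_s: float = 0.2, slow_s: float = 1.0) -> float:
    """Map heading error to a beep interval: smaller error -> faster beeps."""
    err = min(abs(angle_error_deg), 90.0) / 90.0
    return fast_s + err * (slow_s - fast_s)
```

A guidance loop would compare `signboard_bearing(...)` against the user's current heading and replay the cue at the interval returned by `beep_interval_s`, so the beeps speed up as the user turns toward the store.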
dc.subject | 穿戴式裝置 (wearable device) | zh_TW
dc.subject | 導盲輔具 (guidance aid for the blind) | zh_TW
dc.subject | 深度學習 (deep learning) | zh_TW
dc.subject | 電腦視覺 (computer vision) | zh_TW
dc.subject | 避障引導 (obstacle-avoidance guidance) | zh_TW
dc.subject | wearable device | en_US
dc.subject | visually impaired | en_US
dc.subject | deep learning | en_US
dc.subject | computer vision | en_US
dc.subject | obstacle avoidance | en_US
dc.title | 基於AI技術之視障人士的行進避障及超商辨識與引導 (AI-based walking obstacle avoidance, convenience store recognition, and guidance for the visually impaired) | zh_TW
dc.language.iso | zh-TW | zh-TW
dc.type | 博碩士論文 (thesis) | zh_TW
dc.type | thesis | en_US
dc.publisher | National Central University | en_US
