Abstract: | According to World Health Organization statistics from 2019, approximately 2.2 billion people worldwide live with visual impairment, including about 56,000 in Taiwan. For visually impaired people, moving independently through an unfamiliar environment is very difficult, and traditional aids such as white canes and guide dogs each have drawbacks in convenience or availability. This thesis therefore uses a stereo camera together with deep-learning algorithms to help visually impaired people avoid obstacles and detect signs, assisting them in navigating unfamiliar environments.

This thesis includes: (1) developing an offline indoor navigation aid, (2) using MobileNet to detect road-surface conditions, and (3) using three models (YOLO, CRAFT, and CRNN) to parse sign information, helping visually impaired people move around indoor public spaces.

In the road-surface detection experiments, DenseNet (94.58%) outperformed MobileNet (93.53%), but given the hardware constraints of the device, MobileNet, with far fewer parameters, is the more suitable choice. In the sign-detection experiment with YOLO, the mAP at IoU > 0.5 reached 90.07%, which is already sufficient for sign detection to assist visually impaired users. |
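To illustrate the IoU > 0.5 matching criterion behind the reported mAP, here is a minimal sketch of the intersection-over-union computation for two axis-aligned boxes. The function name and the (x1, y1, x2, y2) box format are assumptions for illustration, not code from the thesis:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the overlapping region (empty if boxes are disjoint).
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A predicted sign box is counted as a true positive when its IoU with a
# ground-truth box exceeds 0.5; mAP then averages precision over recall levels.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 0.333..., below the 0.5 threshold
```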