

    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/84117


    Title: A Deep-learning-based Indoor Navigation Assistance System for Visually Impaired Persons (基於深度學習之室內盲人引導輔助系統)
    Author: Wang, Ting (王婷)
    Contributor: Department of Computer Science and Information Engineering
    Keywords: deep learning; navigation systems for the blind; indoor navigation; detection system; visually impaired
    Date: 2020-08-18
    Upload time: 2020-09-02 18:06:51 (UTC+8)
    Publisher: National Central University
    Abstract: It is difficult for visually impaired people to walk independently in an unfamiliar and complex indoor public space, so how to provide environmental information that allows them to reach a destination without assistance from others is an important research topic. This thesis therefore combines image-processing techniques with deep learning to develop an indoor navigation assistance system for the blind that lets visually impaired users walk independently in unfamiliar environments.
    The system consists of three parts: (1) sign detection, which uses a YOLOv3 model with depth images to detect common indoor signs and ground hazard markers and to estimate their distance and position; (2) text detection and recognition, which uses a PSENet model with an OCR text-recognition API to detect and recognize the text on directional signboards; and (3) direction-to-information pairing, which uses a YOLOv3 model to detect regions on a signboard and match each directional arrow with its corresponding text. Combining these three functions, once an image of the signboard ahead is captured, the system can extract the information it carries and convey it to the visually impaired user through voice prompts, helping the user grasp the unfamiliar environment in front of them.
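    The abstract gives no implementation details, so the following is only a minimal sketch, not the thesis code, of two of the post-detection steps it describes: estimating a detected sign's distance from an aligned depth image, and pairing each directional arrow with the nearest recognized text region. The bounding-box format, the millimetre depth unit, and the nearest-centre pairing rule are assumptions made for illustration; the actual YOLOv3/PSENet outputs and pairing logic may differ.

# Minimal sketch (not the thesis code): distance estimation from a depth image
# and arrow-to-text pairing, assuming (x1, y1, x2, y2) pixel boxes and a depth
# map aligned with the colour image, with depth values in millimetres.
import numpy as np

def estimate_distance_mm(depth_map: np.ndarray, box: tuple) -> float:
    """Median depth (mm) inside a detection box; ignores zero (missing) pixels."""
    x1, y1, x2, y2 = box
    region = depth_map[y1:y2, x1:x2]
    valid = region[region > 0]
    return float(np.median(valid)) if valid.size else float("nan")

def pair_arrows_with_text(arrow_boxes, text_boxes):
    """Pair each arrow box with the text box whose centre is closest to it."""
    def centre(b):
        x1, y1, x2, y2 = b
        return np.array([(x1 + x2) / 2.0, (y1 + y2) / 2.0])
    pairs = []
    for a in arrow_boxes:
        ca = centre(a)
        j = int(np.argmin([np.linalg.norm(ca - centre(t)) for t in text_boxes]))
        pairs.append((a, text_boxes[j]))
    return pairs

if __name__ == "__main__":
    depth = np.full((480, 640), 2500, dtype=np.uint16)          # fake 2.5 m scene
    print(estimate_distance_mm(depth, (100, 100, 200, 180)))    # -> 2500.0
    print(pair_arrows_with_text([(10, 10, 40, 40)], [(50, 12, 120, 38)]))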
    The purpose of the system is to provide visually impaired people with information about what lies in front of them in unfamiliar indoor public spaces and to help them reach their destination. Experimental results show that the average sign-detection accuracy reaches 93% and the accuracy of direction-to-information matching is 86%, indicating that the system offers a useful degree of usability.

    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Master's and Doctoral Theses

    Files in This Item:

    File: index.html (Format: HTML, Size: 0 KB)


    All items in NCUIR are protected by original copyright.
