NCU Institutional Repository: Item 987654321/84117


    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/84117


    Title: 基於深度學習之室內盲人引導輔助系統;A Deep-learning-based Indoor Navigation Assistance System for Visually Impaired Persons
    Authors: 王婷;Wang, Ting
    Contributors: Department of Computer Science and Information Engineering
    Keywords: deep learning;navigation systems for the blind;indoor navigation;detection system;visually impaired
    Date: 2020-08-18
    Issue Date: 2020-09-02 18:06:51 (UTC+8)
    Publisher: National Central University
    Abstract: It is difficult for the visually impaired to walk independently in an unfamiliar and complex indoor public space. How to obtain environmental information so that visually impaired persons can reach a destination without the assistance of others is therefore an important research topic. This thesis combines image-processing techniques with deep learning to develop an indoor navigation assistance system that allows the visually impaired to walk independently in unfamiliar environments.
    The indoor navigation assistance system developed in this thesis consists of three parts: (1) sign detection: a YOLOv3 model is applied together with depth images to detect common indoor signs and ground hazards and to estimate their distance and position; (2) text detection and recognition: a PSENet model combined with an OCR text-recognition API detects and recognizes the text on directional signboards; (3) direction-and-information pairing: a YOLOv3 model detects sub-regions of a signboard and matches each directional arrow with its associated text. Combining these three parts, once an image of a signboard ahead is captured, the system can identify the information it carries and convey it through voice prompts, helping visually impaired users grasp the unfamiliar environment in front of them.
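    As a rough illustration of how these three stages fit together, the following is a minimal sketch in plain Python. The YOLOv3, PSENet, and OCR components are passed in as hypothetical callables (detect_signs, detect_text, recognize, speak), since this record does not specify their concrete interfaces; the median-depth distance estimate and the nearest-center arrow-to-text pairing rule are likewise assumptions, not the thesis's exact method.

        # Sketch of the three-stage pipeline; model wrappers are hypothetical
        # stand-ins for the thesis's YOLOv3 / PSENet / OCR-API components.
        from dataclasses import dataclass
        from typing import Callable, List, Tuple

        Box = Tuple[int, int, int, int]  # x, y, width, height in pixels

        @dataclass
        class Detection:
            label: str   # e.g. "sign", "hazard", "arrow"
            box: Box

        def estimate_distance(box: Box, depth: List[List[float]]) -> float:
            """Stage 1 helper: reduce the depth pixels inside a detection box
            to one distance (the median), a common way to localize a sign."""
            x, y, w, h = box
            values = sorted(depth[r][c] for r in range(y, y + h)
                                         for c in range(x, x + w))
            return values[len(values) // 2]

        def pair_arrows_with_text(arrows: List[Detection],
                                  texts: List[Tuple[Box, str]]) -> List[Tuple[Detection, str]]:
            """Stage 3: match each directional arrow to the recognized text
            whose box center is nearest (an assumed pairing rule)."""
            def center(b: Box) -> Tuple[float, float]:
                return (b[0] + b[2] / 2, b[1] + b[3] / 2)
            pairs = []
            for a in arrows:
                ax, ay = center(a.box)
                _, text = min(texts, key=lambda t: (center(t[0])[0] - ax) ** 2
                                                 + (center(t[0])[1] - ay) ** 2)
                pairs.append((a, text))
            return pairs

        def guide(rgb_frame,
                  depth_frame: List[List[float]],
                  detect_signs: Callable,        # hypothetical YOLOv3 wrapper -> List[Detection]
                  detect_text: Callable,         # hypothetical PSENet wrapper -> List[Box]
                  recognize: Callable,           # hypothetical OCR-API wrapper: Box -> str
                  speak: Callable[[str], None]): # hypothetical text-to-speech output
            detections = detect_signs(rgb_frame)
            arrows = [d for d in detections if d.label == "arrow"]
            texts = [(b, recognize(b)) for b in detect_text(rgb_frame)]
            for d in detections:
                if d.label != "arrow":
                    dist = estimate_distance(d.box, depth_frame)
                    speak(f"{d.label} about {dist:.1f} meters ahead")
            for arrow, info in pair_arrows_with_text(arrows, texts):
                speak(f"{info}: follow the arrow at {arrow.box}")

    In this sketch the pairing rule is purely geometric; the thesis instead detects signboard sub-regions with YOLOv3 and pairs arrows with information inside each region, which is more robust when multiple columns of text appear on one signboard.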
    The purpose of this system is to provide visually impaired users with information about the public space in front of them in an unfamiliar environment, helping them reach their destination. Experimental results show that the average sign-detection accuracy reaches 93%, and the accuracy of pairing directions with sign information is 86%, demonstrating that the system attains a practical degree of usability.
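    For reference, a minimal sketch of how per-category accuracy figures like those above can be computed from annotated test cases; the thesis's exact evaluation protocol is not described in this abstract, so the counting rule and the example counts below are assumptions.

        # Hypothetical accuracy computation over annotated test cases; the
        # thesis's exact evaluation protocol is not given in this abstract.
        def accuracy(predictions: list, ground_truth: list) -> float:
            """Fraction of cases whose prediction matches its annotation."""
            correct = sum(p == g for p, g in zip(predictions, ground_truth))
            return correct / len(ground_truth)

        # Illustrative counts only: 93 correct sign detections out of 100
        # annotated frames would reproduce the reported 93% figure.
        print(accuracy(["sign"] * 93 + ["miss"] * 7, ["sign"] * 100))  # 0.93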

    Keywords: deep learning, navigation systems for the blind, indoor navigation, detection system, visually impaired
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Electronic Thesis & Dissertation

    Files in This Item:

    File          Description    Size    Format
    index.html                   0Kb     HTML


    All items in NCUIR are protected by copyright, with all rights reserved.

