NCU Institutional Repository (中大機構典藏), providing theses and dissertations, past exam papers, journal articles, and research projects: Item 987654321/95812


    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/95812


    Title: 應用深度學習於視障人士生活輔助系統;A Deep Learning Approach to Living Assistance Systems for the Visually Impaired
    Authors: 林威任;Lin, Wei-Jen
    Contributors: 資訊工程學系;Department of Computer Science and Information Engineering
    Keywords: 導盲系統;室內引導;偵測系統;Blind guidance;indoor guidance;detection
    Date: 2024-08-14
    Issue Date: 2024-10-09 17:18:15 (UTC+8)
    Publisher: 國立中央大學;National Central University
    Abstract: 根據世界衛生組織2019年的統計，全球約有22億人有視覺障礙的問題，而在台灣視覺障礙者約有五萬六千餘人。對於視覺障礙者而言，要自主在陌生環境中移動是相當困難的，而傳統的輔助裝置像是白手杖、導盲犬等都有其不方便或是難以普及之處，所以本論文使用立體視覺的攝影機與深度學習的演算法協助視障者避開障礙物、偵測路牌，輔助他們在陌生的環境中行走。
    本論文包含：(1) 開發離線版的室內導盲輔助裝置、(2) 使用MobileNet偵測路面情況、(3) 使用YOLO、CRAFT、CRNN三個模型解析路牌資訊，協助視障者在室內的公共場所移動。
    路面偵測的實驗雖然DenseNet (94.58%) 效果優於MobileNet (93.53%)，但考量硬體裝置，使用參數量較少的MobileNet更加適合。而使用YOLO偵測路牌的實驗，當IoU>0.5的mAP為90.07%，已經能透過路牌偵測協助視障者移動。;According to the World Health Organization's 2019 statistics, approximately 2.2 billion people worldwide have a visual impairment, including roughly 56,000 people in Taiwan. For the visually impaired, moving independently through unfamiliar environments is very difficult, and traditional aids such as white canes and guide dogs are either inconvenient or hard to make widely available. This thesis therefore uses a stereo camera and deep learning algorithms to help visually impaired people avoid obstacles and detect signs, assisting them in walking through unfamiliar environments.
    This thesis covers: (1) developing an offline indoor blind-guidance aid, (2) using MobileNet to detect road-surface conditions, and (3) using three models, YOLO, CRAFT, and CRNN, to parse sign information and assist the visually impaired in navigating indoor public spaces.
    Although DenseNet (94.58%) outperformed MobileNet (93.53%) in the road-surface detection experiments, MobileNet is the better fit for the target hardware because it has far fewer parameters. In the sign-detection experiment with YOLO, the mAP at IoU > 0.5 is 90.07%, which is already sufficient for sign detection to help visually impaired users get around.
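    The abstract describes a three-stage sign-reading pipeline: YOLO locates sign boards, CRAFT finds text regions within each sign, and CRNN transcribes them. The sketch below shows only that stage composition; the three model functions are hypothetical placeholder stubs standing in for the trained networks, not the thesis's actual implementation.

    ```python
    from dataclasses import dataclass
    from typing import List, Tuple

    Box = Tuple[int, int, int, int]  # (x1, y1, x2, y2) in pixels

    @dataclass
    class SignReading:
        sign_box: Box
        text: str

    # Hypothetical stand-ins for the three models named in the abstract.
    def detect_signs(frame) -> List[Box]:
        """Stage 1 (YOLO in the thesis): locate sign boards in the frame."""
        return [(40, 20, 200, 90)]  # one hard-coded detection for illustration

    def localize_text(sign_crop) -> List[Box]:
        """Stage 2 (CRAFT in the thesis): find text regions inside a sign crop."""
        return [(5, 10, 150, 40)]

    def recognize_text(text_crop) -> str:
        """Stage 3 (CRNN in the thesis): transcribe one text region."""
        return "EXIT"

    def read_signs(frame) -> List[SignReading]:
        """Compose the stages: detect signs, localize text, recognize text."""
        readings = []
        for box in detect_signs(frame):
            crop = frame  # in a real system: crop the frame to `box`
            words = [recognize_text(region) for region in localize_text(crop)]
            readings.append(SignReading(box, " ".join(words)))
        return readings

    if __name__ == "__main__":
        print(read_signs(frame=None))
    ```

    In a deployed system each stub would wrap a model inference call, and the recognized text would be passed to the guidance output (e.g. audio feedback).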
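    For reference, the IoU > 0.5 criterion in the mAP figure above measures how much a predicted box overlaps the ground-truth box. A minimal computation, assuming boxes are given as (x1, y1, x2, y2) pixel corners:

    ```python
    def iou(a, b):
        """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter)

    # A prediction counts as correct only when IoU exceeds the 0.5 threshold:
    print(iou((0, 0, 10, 10), (5, 0, 15, 10)) > 0.5)  # overlap 50/150 = 1/3, so False
    ```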
    Appears in Collections:[Graduate Institute of Computer Science and Information Engineering] Electronic Thesis & Dissertation

    Files in This Item:

    File: index.html | Size: 0Kb | Format: HTML


    All items in NCUIR are protected by copyright, with all rights reserved.

