NCU Institutional Repository: Item 987654321/95812


    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/95812


    Title: 應用深度學習於視障人士生活輔助系統; A Deep Learning Approach to Living Assistance Systems for the Visually Impaired
    Authors: 林威任; Lin, Wei-Jen
    Contributors: Department of Computer Science and Information Engineering
    Keywords: blind guidance system; indoor guidance; detection
    Date: 2024-08-14
    Upload time: 2024-10-09 17:18:15 (UTC+8)
    Publisher: National Central University
    Abstract: According to World Health Organization statistics from 2019, approximately 2.2 billion people worldwide have a visual impairment, including about 56,000 people in Taiwan. For the visually impaired, moving independently through an unfamiliar environment is very difficult, and traditional aids such as white canes and guide dogs each have drawbacks in convenience or availability. This thesis therefore uses a stereo camera and deep learning algorithms to help visually impaired users avoid obstacles and detect signs, assisting them in walking through unfamiliar environments.
    This thesis includes: (1) developing an offline indoor blind-guidance device, (2) using MobileNet to detect road-surface conditions, and (3) using three models (YOLO, CRAFT, and CRNN) to parse sign information, helping the visually impaired move through indoor public spaces; the data flow of this three-stage pipeline is sketched below.
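    The abstract only names the three models, so the following is a minimal, hypothetical sketch of how such a YOLO → CRAFT → CRNN pipeline could be wired together; it is not the thesis's code, and the three stage functions are placeholders standing in for the trained models. Only the data flow (camera frame → sign boxes → text-region crops → recognized strings) is taken from the abstract.

```python
# Hypothetical sketch of the YOLO -> CRAFT -> CRNN sign-reading pipeline
# described in the abstract. The stage functions below are placeholders
# for trained models; only the data flow between stages is illustrated.
from dataclasses import dataclass

import numpy as np


@dataclass
class Box:
    """Axis-aligned crop region in pixel coordinates."""
    x1: int
    y1: int
    x2: int
    y2: int

    def crop(self, image: np.ndarray) -> np.ndarray:
        return image[self.y1:self.y2, self.x1:self.x2]


def detect_signs(frame: np.ndarray) -> list[Box]:
    """Stage 1 (YOLO): localize whole sign plates in the camera frame.

    Placeholder: a trained detector would return one Box per sign found.
    """
    return [Box(40, 10, 200, 90)]


def detect_text_regions(sign: np.ndarray) -> list[Box]:
    """Stage 2 (CRAFT): find text regions inside a detected sign crop."""
    h, w = sign.shape[:2]
    return [Box(0, 0, w, h)]  # placeholder: whole sign as one text region


def recognize_text(region: np.ndarray) -> str:
    """Stage 3 (CRNN): transcribe a cropped text region to a string."""
    return "EXIT"  # placeholder transcription


def read_signs(frame: np.ndarray) -> list[str]:
    """Run the full pipeline on one frame and collect recognized strings."""
    texts: list[str] = []
    for sign_box in detect_signs(frame):
        sign = sign_box.crop(frame)
        for region_box in detect_text_regions(sign):
            texts.append(recognize_text(region_box.crop(sign)))
    return texts


if __name__ == "__main__":
    dummy_frame = np.zeros((480, 640, 3), dtype=np.uint8)
    print(read_signs(dummy_frame))  # -> ['EXIT']
```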
    In the road-surface detection experiments, DenseNet (94.58%) outperformed MobileNet (93.53%), but given the hardware constraints of the device, MobileNet, with far fewer parameters, is the more suitable choice. In the sign-detection experiments with YOLO, the mAP at IoU > 0.5 is 90.07%, which is already sufficient for sign detection to assist the visually impaired in moving around.
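    For readers unfamiliar with the "mAP at IoU > 0.5" criterion used above: a predicted box counts as a true positive only when its intersection-over-union with a ground-truth box exceeds 0.5, and mAP then averages precision over recall levels and classes. The short sketch below shows the IoU test with made-up box coordinates, not data from the thesis.

```python
# Minimal illustration of the IoU > 0.5 matching criterion behind mAP.
def iou(a: tuple[float, float, float, float],
        b: tuple[float, float, float, float]) -> float:
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)


pred = (50.0, 50.0, 150.0, 150.0)   # hypothetical predicted sign box
truth = (60.0, 60.0, 160.0, 160.0)  # hypothetical ground-truth box
print(iou(pred, truth))        # ~0.68
print(iou(pred, truth) > 0.5)  # True: counts as a correct detection
```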
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Theses & Dissertations

    Files in This Item:

    File          Description    Size    Format    Views
    index.html                   0Kb     HTML      29


    All items in NCUIR are protected by original copyright.

