NCU Institutional Repository - theses and dissertations, past exam papers, journal articles, and research projects: Item 987654321/90114
RC Version 7.0 © Powered By DSPACE, MIT. Enhanced by NTU Library IR team.


    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/90114


    Title: Cell Phone-Assisted Indoor Object Detection and Retrieval for the Visually Impaired
    Authors: 陳蓁晏; Chen, Zhen-Yan
    Contributors: Department of Electrical Engineering
    Keywords: Visually impaired people; assistive devices; Mobile Application; voice navigation; real-time computation; deep learning; Object detection; semi-automatic annotation; model lightweighting; Cell phone; Neural networks
    Date: 2022-09-15
    Upload time: 2022-10-04 12:11:27 (UTC+8)
    Publisher: National Central University
    Abstract: We often need to reach for objects at home or in the office, but this is a difficult task for a visually impaired person, especially alone in an unfamiliar indoor environment. This thesis addresses that problem. We develop a mobile application (App) for cell phones that lets a visually impaired user select an object to retrieve; the App detects the object's location and distance and uses voice and vibration feedback to guide the user to the object so it can be picked up.
    The research consists of three parts. The first is training the object detection model. The detected objects are everyday items found in indoor environments, drawn from several public datasets, and we design a system that automatically filters annotations, saving the large amount of human labor otherwise required for labeling. The second part modifies the object detection architecture so that the model can run on a cell phone chip, and applies neural network quantization across several input image sizes, computational precisions, and model architectures; after this lightweight conversion, inference across these model variants is on average three times faster in frames per second (FPS). The third part is the design of the App itself, which uses the phone's built-in sensors to guide the user's direction and posture. It includes an automatic calibration algorithm so that it works on any phone without manual parameter tuning, and an operation interface designed around the habits of visually impaired users to improve convenience. The usage scenario is as follows: after the user selects the desired object in the App and moves the phone until the camera sees it, the App guides the user toward the object with Chinese voice prompts and notification tones until they are close enough to take it. Unlike previous work that requires wearing computing devices, this system needs only a phone running the App, removing the burden of putting on equipment so that visually impaired people can retrieve everyday objects in the lightest possible way.
    Appears in Collections: [Graduate Institute of Electrical Engineering] Theses & Dissertations

    Files in This Item:

    File        Description  Size  Format  Views
    index.html               0Kb   HTML    65


    All items in NCUIR are protected by copyright, with all rights reserved.

