
    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/78768

    Title: 視障人士的智慧夥伴 (II~V)(I); The Intelligent Partner of the Blind (I)
    Authors: 王文俊;謝易錚;陳翔傑;陳慶瀚;施國琛;蔡宗漢;林智揚;蘇木春
    Contributors: Department of Electrical Engineering, National Central University (國立中央大學電機工程學系)
    Keywords: Deep learning; Robots; Wearable device; Street view reasoning; Artificial intelligence; Visually impaired supporting system; Machine hearing
    Date: 2018-12-19
    Issue Date: 2018-12-20 13:47:44 (UTC+8)
    Publisher: Ministry of Science and Technology (科技部), Taiwan
    Abstract: This project aims to support the daily lives of visually impaired people, covering both safe, convenient outdoor travel and indoor tasks such as household chores and object recognition. Using deep learning techniques, our team will design and build mobility and living aids that substantially improve users' ability to travel and to recognize objects, with particular emphasis on outdoor mobility. Taiwan has more than 180,000 visually impaired people, and independent mobility is the key to rebuilding their dignity and connecting them with their communities. Indoors, most visually impaired people rely on the white cane; outdoors, they depend on guide dogs or human assistants. Guide dogs are very expensive, and they cannot understand a user's verbal requests.
Relying on assistants, whether relatives or hired helpers, is also costly in time and money. A system that can communicate verbally with the user, recognize and avoid obstacles, guide walking outdoors, and identify objects indoors is therefore urgently needed. The project will develop a complete intelligent system comprising hardware, firmware, and the software platform, organized into four subtasks. Subtask one designs and builds the self-guided robot and the wearable device, including GPS navigation and safe-walking functions. Subtask two uses deep networks to recognize street signs, street scenes, and street events, so that users can infer their location and surroundings from road signs and storefronts; recognizing street events further helps them react quickly to nearby traffic conditions. To let users communicate easily with the guide robot or wearable device, subtask three will develop a deep-learning robot-audition (DRA) system that screens out noise in complex acoustic environments and supports remote voice control and open spoken dialogue for navigation; we will further investigate low-power, hardware-optimized DRA chip designs for mobile and wearable use. Indoor environments for the visually impaired include familiar spaces (home, office) and unfamiliar ones (hospitals, government offices, shops). Subtask four will develop an intelligent indoor living-assistance system providing two kinds of environmental information: (1) in the home, users can locate furniture (chairs, tables) and objects (keys, remote controls) that have been moved; (2) in other indoor spaces, users can learn where the stairs, escalators, or elevators are and whether warning signs (wet floor, under construction) lie ahead. With this information, visually impaired users can walk indoors with confidence, shop independently, and socialize.
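As an illustration of subtask two, a street-scene recognizer would typically gate its output on the network's confidence before announcing anything to the user. The sketch below is a hypothetical example, not the project's code: the label set, logits, and threshold are illustrative assumptions, with a softmax over raw network scores standing in for the real deep network.

```python
# Hypothetical confidence gating for a street-scene classifier (subtask two).
# LABELS and the 0.6 threshold are illustrative assumptions, not taken from
# the project; `logits` stands in for the raw scores of a deep network.
import numpy as np

LABELS = ["crosswalk", "bus stop", "construction", "storefront"]

def announce(logits, threshold=0.6):
    """Return the top label if the network is confident enough, else None."""
    logits = np.asarray(logits, dtype=float)
    probs = np.exp(logits - logits.max())   # numerically stable softmax
    probs /= probs.sum()
    best = int(probs.argmax())
    return LABELS[best] if probs[best] >= threshold else None
```

Withholding low-confidence predictions matters more here than in a typical classifier: announcing a wrong street scene to a blind user is worse than staying silent.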
Once the four subtasks are completed and the related systems are developed, the project will implement the algorithms on chips to shrink the hardware components, so that the guide robot's modules fit into a wearable device; together with the intelligent indoor living-assistance system, this will genuinely help visually impaired people live conveniently and move about independently.
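Subtask four's "was this object moved?" check could be sketched as below. This is a minimal illustration under assumed conventions (object names, 2-D positions in metres, a fixed displacement tolerance), not the project's actual design:

```python
# Illustrative sketch of the indoor moved-object check (subtask four).
# `remembered` holds the last known home positions, `detected` the fresh
# detections; both map object name -> (x, y) in metres. The 0.5 m
# tolerance is an assumed parameter, not from the project.
import math

def moved_objects(remembered, detected, tolerance=0.5):
    """Return the names of remembered objects displaced beyond tolerance."""
    moved = []
    for name, (x0, y0) in remembered.items():
        if name in detected:
            x1, y1 = detected[name]
            if math.hypot(x1 - x0, y1 - y0) > tolerance:
                moved.append(name)
    return sorted(moved)
```

For example, if the keys were detected two metres from where they were last seen while the chair has barely shifted, only the keys would be reported, and the system could announce their new location to the user.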
    Relation: Science and Technology Policy Research and Information Center, National Applied Research Laboratories (財團法人國家實驗研究院科技政策研究與資訊中心)
    Appears in Collections: [Department of Electrical Engineering] Research Projects

    All items in NCUIR are protected by copyright, with all rights reserved.
