
    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/78674

    Title: 基於深度學習之手勢人機介面與定位加值服務 (I); A Deep Learning-Based Gesture Interface for Value-Added Location Services (I)
    Authors: 范國清;施國琛;鄭旭詠;蘇柏齊
    Contributors: 國立中央大學資訊工程系 (Department of Computer Science and Information Engineering, National Central University)
    Keywords: 醫療照護服務;智慧型建築;類神經網路;手勢;社交行為預測;凌空手寫畫圖;智慧眼鏡;文字與商標;深度學習;精確定位;虛擬旅遊;社會網路;home caring;smart building;neural network;gesture recognition;social behavior prediction;in-air handwriting;smart glasses;text labels or store symbols;deep learning;precise location;virtual tourist;social network
    Date: 2018-12-19
    Issue Date: 2018-12-20 13:42:46 (UTC+8)
    Publisher: 科技部 (Ministry of Science and Technology)
    Abstract: [Joint project outline] This joint research project applies Deep Learning technologies on a state-of-the-art ICT platform to provide automated services that improve quality of life, especially medical and home care for the elderly. The project starts with a smart building in which various sensors, connected to specially designed computing platforms and either worn by individuals or installed at fixed locations, collect visual information, motion data, audio, biological signals, and power signals. The collected data are used to train many different types of neural networks for tasks such as image classification, street-sign labeling, and recognition of roads, paths, and indoor layouts inside and outside the building. The joint project consists of five sub-projects: (1) an intelligent companion for the visually impaired; (2) a deep-learning-based gesture interface for value-added location services (this sub-project); (3) an AI-centered EEG brain-computer interface; (4) deep-learning-based spoken-language processing; and (5) a smart-building energy manager based on AI and Building Information Modeling. Each sub-project develops its own core deep-learning technologies, which the joint project integrates to broaden their applications.
We expect the outcome of this joint project to contribute to local and overseas industry and academia, not only through academic publications but also through the practical impact of technology transfer, patent applications, and high-value-added products. [Sub-project outline] This sub-project investigates new neural network modules and promises to deliver an interactive system that employs image understanding, gesture recognition, and social behavior prediction for value-added location-aware services. An in-air handwriting module will be integrated with a 3D gesture recognition system, allowing users to operate an intelligent interface for multiple services with freehand gestures and drawing. Two types of interfaces will be developed: a stationary interface and a wearable interface. The stationary interface consists of a large screen with multiple RGB-D cameras that receive 3D finger input. The wearable interface uses smart glasses equipped with a camera; it connects to the stationary interface, whose computing power enables several cutting-edge applications. For example, on the wearable interface, street images such as text labels or store symbols can be recognized. While the user walks toward a supermarket or restaurant, the indoor scene and the user's position can be fed to a Deep Learning machine that computes the precise location of the store the user is in. This information helps a patient with Alzheimer's disease communicate with his or her family and receive proper help. The wearable interface can also be used by a Virtual Tourist Avatar: imagine that an elderly user operates the stationary interface at home while connected to the avatar. The elder's view is synchronized with the images captured by the camera on the avatar's smart glasses. When the elder points to a particular store, information about it can be displayed and used by the avatar, who serves the elder by talking to store vendors and purchasing goods.
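As a concrete illustration of the 3D finger input described above, the sketch below resamples a fingertip trajectory (as an RGB-D camera might report it) to a fixed number of points, normalizes it for position and scale, and matches it against stored templates by mean point-wise distance. This template-matching baseline, and the `swipe_right`/`swipe_up` templates, are illustrative assumptions only; the sub-project itself proposes neural-network-based recognition.

```python
import math

def resample(points, n=16):
    """Resample a 3D trajectory to n evenly spaced points along its arc length."""
    dists = [0.0]
    for a, b in zip(points, points[1:]):
        dists.append(dists[-1] + math.dist(a, b))
    total = dists[-1] or 1.0
    out = []
    for i in range(n):
        target = total * i / (n - 1)
        j = 0
        while j < len(dists) - 2 and dists[j + 1] < target:
            j += 1
        seg = dists[j + 1] - dists[j] or 1.0
        t = (target - dists[j]) / seg
        p, q = points[j], points[j + 1]
        out.append(tuple(p[k] + t * (q[k] - p[k]) for k in range(3)))
    return out

def normalize(points):
    """Translate to the centroid and scale to a unit bounding box."""
    n = len(points)
    centroid = tuple(sum(p[k] for p in points) / n for k in range(3))
    shifted = [tuple(p[k] - centroid[k] for k in range(3)) for p in points]
    scale = max(max(abs(c) for c in p) for p in shifted) or 1.0
    return [tuple(c / scale for c in p) for p in shifted]

def classify(trajectory, templates):
    """Return the template label with the smallest mean point-wise distance."""
    query = normalize(resample(trajectory))
    best, best_d = None, float("inf")
    for label, tmpl in templates.items():
        ref = normalize(resample(tmpl))
        d = sum(math.dist(a, b) for a, b in zip(query, ref)) / len(query)
        if d < best_d:
            best, best_d = label, d
    return best

# Hypothetical gesture templates in camera space: a horizontal and a vertical swipe.
templates = {
    "swipe_right": [(x / 10.0, 0.0, 1.0) for x in range(11)],
    "swipe_up": [(0.0, y / 10.0, 1.0) for y in range(11)],
}
print(classify([(0.1, 0.0, 0.9), (0.4, 0.02, 0.9), (0.9, 0.01, 0.9)], templates))
```

A deep-learning recognizer would replace the nearest-template step with a network trained on many such normalized trajectories, which is what allows the system to scale to in-air handwriting and drawing.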
This virtual tourist service frees the elderly from the limits of their mobility, letting them travel virtually and talk with friends anywhere in the world. In addition, these interactions reveal the user's social connections, so the behavior of social groups can be analyzed and abnormal individual behavior can be predicted. We aim to develop a practical system within four years, so that these technologies can support further industrial and commercial applications and make a significant contribution to Taiwan's industry and academia.
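The location service described in the abstract, in which recognized text labels or store symbols yield a precise position, can be sketched as a simple lookup: match the labels detected in a camera frame against a database of known storefronts and estimate the user's position from the matches. The store names and coordinates below are hypothetical; a real system would use a trained detector and fuzzy matching rather than exact strings.

```python
# Hypothetical storefront database: detected label -> (latitude, longitude).
STORES = {
    "FamilyMart": (24.9681, 121.1946),
    "Starbucks": (24.9685, 121.1950),
    "PostOffice": (24.9690, 121.1955),
}

def estimate_position(detected_labels, stores=STORES):
    """Estimate the user's position as the centroid of the coordinates of
    every detected label found in the store database; None if nothing matches."""
    matches = [stores[label] for label in detected_labels if label in stores]
    if not matches:
        return None
    lat = sum(p[0] for p in matches) / len(matches)
    lon = sum(p[1] for p in matches) / len(matches)
    return (round(lat, 5), round(lon, 5))

print(estimate_position(["FamilyMart", "Starbucks", "Unknown"]))
```

In the caregiving scenario, the estimated coordinates would be forwarded to the patient's family, so a wandering Alzheimer's patient can be located from the signs around them.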
    Relation: 財團法人國家實驗研究院科技政策研究與資訊中心 (Science and Technology Policy Research and Information Center, National Applied Research Laboratories)
    Appears in Collections: [資訊工程學系 Department of Computer Science and Information Engineering] Research Projects

    All items in NCUIR are protected by copyright, with all rights reserved.

