NCU Institutional Repository - download access to theses and dissertations, past exam papers, journal articles, and research projects: Item 987654321/93144
RC Version 7.0 © Powered By DSPACE, MIT. Enhanced by NTU Library IR team.


    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/93144


    Title: Non-Touch Cooperation: An Interactive Mechanism Design Based on Mid-Air Gestures
    Authors: 王昭元;Wang, Chao-Yuan
    Contributors: Department of Information Management
    Keywords: mid-air gesture;skeleton;action recognition;digital signage;non-touch economy;convolutional neural network
    Date: 2023-07-11
    Issue Date: 2024-09-19 16:44:29 (UTC+8)
    Publisher: National Central University
    Abstract: Due to the COVID-19 pandemic and the evolution of our society, non-touch services have gained significant momentum, giving rise to the "Non-Touch Economy". Digital signage is one industry that has grown rapidly and is well suited to non-touch services. Skeleton-based action and gesture recognition methods provide a more direct and intuitive means of control, and the use of skeletal data helps protect privacy, making them well positioned for the non-touch economy. However, existing solutions often carry hardware limitations and domain-specific requirements, and involve an excessive number of control movements, which steepens the user's learning curve and hinders adoption.
      This research proposes a multi-person action recognition framework that combines arm and hand-gesture control, designed specifically for digital signage applications. By incorporating additional body-joint information into the control gestures, the framework enhances functionality, widens the gap between functional actions and everyday movements, and achieves a broader range of control functions with fewer gesture combinations. The framework also introduces a motion interval detection strategy that further reduces false recognition between functional actions and everyday movements, thereby minimizing unnecessary computation. Additionally, considering the structural characteristics of the human body, the data input method of existing 3D convolutional neural networks was adapted to the proposed combined recognition method and its performance examined. Another contribution of this study is the integration of publicly available gesture and human-action datasets to simulate realistic movements.
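      The thesis does not spell out its motion interval detection strategy here; as a minimal sketch of one plausible approach, the snippet below segments candidate action intervals by thresholding per-frame motion energy of the skeleton joints (the function name, threshold, and minimum length are illustrative assumptions, not the thesis's actual design):

```python
import numpy as np

def detect_motion_intervals(keypoints, energy_thresh=0.05, min_len=5):
    """Segment frames where skeleton motion energy exceeds a threshold.

    keypoints: array of shape (T, J, 2) -- T frames, J joints, (x, y).
    Returns a list of (start, end) frame-index pairs (end exclusive).
    """
    # Per-frame motion energy: mean joint displacement between frames.
    disp = np.linalg.norm(np.diff(keypoints, axis=0), axis=-1)  # (T-1, J)
    energy = disp.mean(axis=1)                                  # (T-1,)
    active = energy > energy_thresh

    intervals, start = [], None
    for t, a in enumerate(active):
        if a and start is None:
            start = t                      # interval opens
        elif not a and start is not None:
            if t - start >= min_len:       # keep only long-enough runs
                intervals.append((start, t))
            start = None
    if start is not None and len(active) - start >= min_len:
        intervals.append((start, len(active)))
    return intervals
```

      Frames outside the returned intervals can be skipped entirely, which is one way such a gate reduces both false positives on everyday movements and unnecessary classifier computation.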
      To validate the effectiveness of the framework and the synthesized dataset, this study recorded the relevant actions multiple times with the same individual to better align them with the dataset, and evaluated several well-known convolutional neural network models after converting them into 3D variants.
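      The abstract also leaves the exact input layout for the adjusted 3D CNN unspecified. A minimal sketch, assuming 2-D keypoints and a wrist-centered normalization (both assumptions, not the thesis's actual design), of how arm and hand joints might be fused into a tensor a 3D convolution can consume:

```python
import numpy as np

def to_cnn_input(arm_kp, hand_kp):
    """Fuse per-frame arm and hand keypoints into one 3D-CNN input.

    arm_kp:  (T, A, 2) arm joint (x, y) coordinates, e.g. A = 4
             (shoulder, elbow, wrist, hand root).
    hand_kp: (T, H, 2) hand joint coordinates, e.g. H = 21 as in
             common hand-keypoint layouts.
    Returns a (2, T, A+H) float32 array -- coordinate channels x
    frames x joints, a skeleton "pseudo-image" over which a 3D
    convolution can slide jointly in time and joint dimensions.
    """
    joints = np.concatenate([arm_kp, hand_kp], axis=1)  # (T, A+H, 2)
    # Center on the last arm joint (taken here as the wrist) so that
    # position in the frame does not dominate the signal -- an
    # illustrative normalization only.
    wrist = joints[:, arm_kp.shape[1] - 1 : arm_kp.shape[1], :]
    joints = joints - wrist
    return np.transpose(joints, (2, 0, 1)).astype(np.float32)
```

      Stacking arm and hand joints into one tensor is one way to realize the combined arm-and-gesture recognition the abstract describes, since the network then sees both in a single receptive field.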
    Appears in Collections:[Graduate Institute of Information Management] Electronic Thesis & Dissertation

    Files in This Item:

    File: index.html    Size: 0Kb    Format: HTML


    All items in NCUIR are protected by copyright, with all rights reserved.
