dc.description.abstract | Due to the development of the COVID-19 pandemic and the evolution of our society, non-touch services have gained significant momentum, giving rise to the concept of the "Non-Touch Economy". One industry that has experienced rapid growth and is well-suited for non-touch services is digital signage. Skeleton-based action and gesture recognition methods provide a more direct and intuitive means of control, and the use of skeletal data helps protect privacy, making them ideal for the non-touch economy. However, existing solutions often have hardware limitations and domain-specific requirements, and involve an excessive number of control movements, which steepens the user's learning curve and makes adoption challenging.
This research proposed a multi-person action recognition framework that combines arm and gesture control, specifically designed for digital signage applications. By incorporating additional body joint information into gesture control, the framework enhances functionality, increases the differentiation from everyday actions, and achieves a wider range of control functions with fewer gesture combinations. Furthermore, this research introduced a motion interval detection strategy into the framework to reduce false recognition between functional actions and everyday movements, thereby minimizing unnecessary computation. Additionally, considering the structural characteristics of the human body, the data input method of existing 3D convolutional neural networks was adapted to the proposed recognition method, and its performance was explored. Another contribution of this study is the integration of publicly available gesture and human action datasets to simulate real movements.
To validate the effectiveness of the framework and the synthesized dataset, this study recorded the related actions multiple times with the same individual to ensure better alignment with the dataset. It also selected several well-known convolutional neural network models and converted them into 3D models for evaluation. | en_US |