NCU Institutional Repository (theses, past exams, journal articles, and research projects): Item 987654321/86671


    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/86671


    Title: The design of a virtual reality system with multiple-people interaction based on deep learning 3D pose estimation
    Authors: 蘇信嘉;Su, Sin-Jia
    Contributors: Department of Computer Science and Information Engineering
    Keywords: Deep Learning; 3D Pose Estimation; Virtual Reality; Human-Computer Interaction
    Date: 2021-08-13
    Issue Date: 2021-12-07 13:06:16 (UTC+8)
    Publisher: National Central University
    Abstract: In recent years, virtual reality (VR) technology has matured and is now applied across many different fields. Through various devices and sensors, users can immerse themselves in a virtual space and interact with virtual objects or with other users. Human-computer interaction within VR systems is itself a topic worth studying. Most current systems rely on head-mounted displays and handheld controllers; although these provide a highly immersive experience, some users find wearing such devices cumbersome or even uncomfortable.

    To improve this mode of interaction, this thesis proposes a method, using the VR game 牽罟 (a traditional beach-seine fishing activity) at the Hakka Culture Museum as a case study, that combines the OpenPose deep-learning human pose estimation model, the ZED 2 stereo depth camera, and the Unity engine, so that users can interact with virtual objects and play the game without wearing any device. The ZED 2 camera captures images of multiple users together with depth information. The images are fed to the OpenPose model, which outputs 2D keypoint coordinates for each user; a Unity script then receives these 2D keypoints, combines them with the depth data to convert them into 3D keypoint coordinates, and projects them into the Unity virtual space. This method can be applied to VR systems that serve multiple users and require real-time interaction: users interact with virtual objects purely through their body movements.
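    The 2D-to-3D keypoint conversion described in the abstract can be sketched as a standard pinhole-camera back-projection. The sketch below is a minimal illustration, not the thesis's actual Unity/ZED code: the function name `backproject` and the intrinsics values are hypothetical, and in practice the ZED SDK supplies the calibrated intrinsics and the per-pixel depth map.

    ```python
    # Sketch: lift a 2D keypoint (u, v) with measured depth to a 3D camera-space
    # point using pinhole intrinsics (fx, fy: focal lengths in pixels;
    # cx, cy: principal point). All values here are illustrative.

    def backproject(u, v, depth, fx, fy, cx, cy):
        """Map pixel (u, v) at the given depth (metres) to camera-space (X, Y, Z)."""
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        return (x, y, depth)

    if __name__ == "__main__":
        fx = fy = 700.0         # illustrative focal length in pixels
        cx, cy = 640.0, 360.0   # principal point for a 1280x720 image
        # A keypoint at the image centre lies on the optical axis: X = Y = 0.
        print(backproject(640.0, 360.0, 2.0, fx, fy, cx, cy))  # (0.0, 0.0, 2.0)
        print(backproject(990.0, 360.0, 2.0, fx, fy, cx, cy))  # (1.0, 0.0, 2.0)
    ```

    Note that image coordinates increase downward while Unity's coordinate system is left-handed with Y up, so the Y component typically needs to be negated when the point is placed in the Unity scene.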
    Appears in Collections:[Graduate Institute of Computer Science and Information Engineering] Electronic Thesis & Dissertation

    Files in This Item:

    File        Description    Size    Format
    index.html                 0Kb     HTML    View/Open


    All items in NCUIR are protected by copyright, with all rights reserved.

