DC Field | Value | Language |
dc.contributor | 資訊工程學系 | zh_TW |
dc.creator | 蘇信嘉 | zh_TW |
dc.creator | Sin-Jia Su | en_US |
dc.date.accessioned | 2021-08-13T07:39:07Z | |
dc.date.available | 2021-08-13T07:39:07Z | |
dc.date.issued | 2021 | |
dc.identifier.uri | http://ir.lib.ncu.edu.tw:444/thesis/view_etd.asp?URN=108525001 | |
dc.contributor.department | 資訊工程學系 | zh_TW |
dc.description | 國立中央大學 | zh_TW |
dc.description | National Central University | en_US |
dc.description.abstract | 近年來虛擬實境(VR)的技術發展成熟,廣泛的應用在各個不同的領域上,透過不同的裝置以及感應器,讓使用者可以沉浸在虛擬空間當中,能夠和其它的虛擬物件或是其它使用者來進行互動。而在虛擬實境系統中的人機互動方式也是一個值得研究的議題,現今的互動方式大部分都是透過頭盔式裝置以及手把控制器來進行互動,雖然可以提供使用者非常好的沉浸式體驗,但可能對於部分使用者來說,需要穿戴這些裝置是很麻煩的,甚至是會有不舒服的感覺。
為了改善這種人機互動方式,在本文中,我提出了一個方法,並且以客家文化博物館裡的虛擬實境遊戲-牽罟為例子,結合 OpenPose 人體姿態辨識深度學習模型、ZED 2 深度相機和 Unity 虛擬實境引擎來進行開發,試著讓使用者不需要穿戴任何裝置就可以和虛擬物件互動,且進行遊戲。透過 ZED 2 深度相機來捕捉多位使用者的影像,取得深度資訊,再將影像資料作為 OpenPose 模型的輸入,就可以獲取到多位使用者的 2D 關鍵點座標,接著在 Unity 的腳本當中取得這些 2D 關鍵點座標,搭配深度資訊來把 2D 的關鍵點座標轉換成 3D 的關鍵點座標,並且投影到 Unity 的虛擬空間當中。這個方法可以應用在有多位使用者且需要即時互動的虛擬實境系統,使用者們只需要透過肢體動作,就能和虛擬空間中的虛擬物件互動。 | zh_TW |
dc.description.abstract | In recent years, virtual reality (VR) technology has matured and is now widely applied in many fields. Through various devices and sensors, users can immerse themselves in a virtual space and interact with virtual objects or with other users. Human-computer interaction in virtual reality systems is itself a topic worth studying. Today's interaction methods mostly rely on head-mounted devices and handheld controllers; although these provide a highly immersive experience, some users find wearing such devices cumbersome or even uncomfortable.
To improve this mode of human-computer interaction, this paper proposes a method, taking the beach-seining virtual reality game Qiangu (牽罟) at the Hakka Culture Museum as an example. The system is developed by combining the OpenPose human pose estimation deep learning model, the ZED 2 stereo camera, and the Unity virtual reality engine, allowing users to interact with virtual objects and play the game without wearing any device. The ZED 2 stereo camera captures images of multiple users and obtains depth information; the image data is then fed into the OpenPose model to obtain each user's 2D keypoint coordinates. A Unity script receives these 2D keypoints and, combined with the depth information, converts them into 3D keypoint coordinates, which are projected into the Unity virtual space. This method can be applied to virtual reality systems with multiple users that require real-time interaction: users interact with virtual objects in the virtual space using only their body movements. | en_US |
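The 2D-to-3D keypoint conversion described in the abstract can be sketched as a standard pinhole-camera back-projection, where each OpenPose pixel keypoint is lifted into camera space using the depth value at that pixel. This is a minimal illustrative sketch, not the thesis's actual code; the function name and the intrinsic values used in the example are assumptions.

```python
# Hypothetical sketch: back-project an OpenPose 2D keypoint into 3D camera
# space using a depth measurement (e.g. from a ZED 2 stereo camera) and the
# pinhole camera model.
def keypoint_2d_to_3d(u, v, depth, fx, fy, cx, cy):
    """Convert a pixel keypoint (u, v) plus its depth in metres into
    camera-space coordinates (X, Y, Z)."""
    x = (u - cx) * depth / fx  # horizontal offset from principal point, scaled by depth
    y = (v - cy) * depth / fy  # vertical offset from principal point, scaled by depth
    return (x, y, depth)       # Z is the measured depth itself

# Example with placeholder intrinsics (fx = fy = 700, principal point at the
# centre of a 1280x720 frame); real values come from the camera calibration.
point = keypoint_2d_to_3d(960, 360, 2.0, 700.0, 700.0, 640.0, 360.0)
```

In practice the resulting camera-space point would still need a rigid transform into Unity's world coordinate system (and a handedness flip, since Unity uses a left-handed convention), which depends on how the camera is placed in the scene.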
dc.subject | 深度學習 | zh_TW |
dc.subject | 3D姿態辨識 | zh_TW |
dc.subject | 虛擬實境 | zh_TW |
dc.subject | 人機互動 | zh_TW |
dc.subject | Deep Learning | en_US |
dc.subject | 3D Pose Estimation | en_US |
dc.subject | Virtual Reality | en_US |
dc.subject | Human-Computer Interaction | en_US |
dc.title | 基於深度學習 3D 人體姿態辨識之多人互動虛擬實境系統設計 | zh_TW |
dc.language.iso | zh-TW | |
dc.title | The design of virtual reality system with multiple people interaction based on deep learning 3D pose estimation | en_US |
dc.type | 博碩士論文 | zh_TW |
dc.type | thesis | en_US |
dc.publisher | National Central University | en_US |