

    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/86671


    Title: 基於深度學習 3D 人體姿態辨識之多人互動虛擬實境系統設計;The design of virtual reality system with multiple people interaction based on deep learning 3D pose estimation
    Authors: 蘇信嘉;Su, Sin-Jia
    Contributors: Department of Computer Science and Information Engineering
    Keywords: Deep Learning;3D Pose Estimation;Virtual Reality;Human-Computer Interaction
    Date: 2021-08-13
    Upload time: 2021-12-07 13:06:16 (UTC+8)
    Publisher: National Central University
    Abstract: In recent years, virtual reality (VR) technology has matured and is widely applied in many fields. Through various devices and sensors, users can immerse themselves in a virtual space and interact with virtual objects or with other users. How humans interact with a virtual reality system is itself a topic worth studying: most current systems rely on head-mounted displays and handheld controllers. Although these provide an excellent immersive experience, some users find wearing such devices cumbersome, or even uncomfortable.

    To improve on this style of human-computer interaction, this paper proposes a method, taking the virtual reality game 牽罟 in the Hakka Culture Museum as an example. The system is developed by combining the OpenPose deep learning model for human pose estimation, the ZED 2 stereo camera, and the Unity engine, with the goal of letting users interact with virtual objects and play the game without wearing any device. The ZED 2 stereo camera captures images of multiple users together with depth information; the images are fed into the OpenPose model to obtain each user's 2D keypoint coordinates; a Unity script then receives these 2D keypoints, combines them with the depth information to convert them into 3D keypoint coordinates, and projects them into Unity's virtual space. This method can be applied to virtual reality systems that require real-time interaction among multiple users: users interact with virtual objects in the virtual space through body movements alone.
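    The 2D-to-3D conversion described above can be sketched as a standard pinhole back-projection: each OpenPose keypoint (u, v) is paired with the depth value at that pixel and lifted into camera-space (x, y, z). This is a minimal illustration, not the thesis's actual Unity/C# implementation; the function name and the camera intrinsics (fx, fy, cx, cy) are illustrative placeholders, not the real ZED 2 calibration.

    ```python
    # Sketch: lifting 2D pose keypoints to 3D camera-space points using
    # a per-pixel depth map and the pinhole camera model.
    import numpy as np

    def keypoints_2d_to_3d(keypoints_2d, depth_map, fx, fy, cx, cy):
        """Back-project (u, v) pixel keypoints into 3D (x, y, z) metres.

        keypoints_2d : (N, 2) array of pixel coordinates (e.g. from OpenPose)
        depth_map    : (H, W) array of depth in metres (e.g. from a ZED 2)
        fx, fy, cx, cy : pinhole intrinsics (focal lengths, principal point)
        """
        h, w = depth_map.shape
        points_3d = []
        for u, v in keypoints_2d:
            # Clamp to valid pixel indices before sampling the depth map.
            ui = int(round(min(max(u, 0), w - 1)))
            vi = int(round(min(max(v, 0), h - 1)))
            z = depth_map[vi, ui]          # depth at the keypoint pixel
            x = (u - cx) * z / fx          # pinhole back-projection
            y = (v - cy) * z / fy
            points_3d.append((x, y, z))
        return np.array(points_3d)

    # Toy example: with a flat 2 m depth map, a keypoint at the principal
    # point back-projects onto the optical axis at (0, 0, 2).
    depth = np.full((720, 1280), 2.0)
    kps = np.array([[640.0, 360.0]])
    print(keypoints_2d_to_3d(kps, depth, fx=700.0, fy=700.0, cx=640.0, cy=360.0))
    ```

    In a real pipeline the resulting camera-space points would still need a rigid transform into Unity's world coordinate system (and Unity uses a left-handed convention), which this sketch omits.
    
    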
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Theses and Dissertations

    Files in This Item:

    File: index.html  Size: 0Kb  Format: HTML  Views: 63


    All items in NCUIR are protected by the original copyright.
