Master's/Doctoral Thesis 108525001: Detailed Record




Name: Sin-Jia Su (蘇信嘉)    Department: Computer Science and Information Engineering
Thesis Title: The design of virtual reality system with multiple people interaction based on deep learning 3D pose estimation
(基於深度學習 3D 人體姿態辨識之多人互動虛擬實境系統設計)
Related Theses
★ A Grouping Mechanism Based on Social Relationships in edX Online Discussion Forums
★ A 3D Visualized Facebook Interaction System Built with Kinect
★ An Assessment System for Smart Classrooms Built with Kinect
★ An Intelligent Metropolitan Route-Planning Mechanism for Mobile Device Applications
★ Dynamic Texture Transfer Based on Analyzing Correlations of Key Motion
★ A Seam-Carving System that Preserves Straight-Line Structures in Images
★ A Community Recommendation Mechanism Built on an Open Online Community Learning Environment
★ System Design of an Interactive Situated Learning Environment for English as a Foreign Language
★ An Emotional Color Transfer Mechanism with Skin-Color Preservation
★ A Hand-Gesture Recognition Framework for Virtual Keyboards
★ Error Analysis of Fractional-Power Grey Generating Prediction Models and Development of a Computer Toolbox
★ Real-Time Human Skeleton Motion Reconstruction Using Inertial Sensors
★ Real-Time 3D Modeling Based on Multiple Cameras
★ A Grouping Mechanism for Genetic Algorithms Based on Complementarity and Social Network Analysis
★ A Virtual Instrument Performance System with Real-Time Hand Tracking
★ A Real-Time Virtual Instrument Performance System Based on Neural Networks
  1. This electronic thesis has been approved for immediate open access.
  2. The open-access electronic full text is licensed to users only for personal, non-commercial retrieval, reading, and printing for the purpose of academic research.
  3. Please comply with the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast the work without authorization.

Abstract (Chinese) In recent years, virtual reality (VR) technology has matured and is widely applied in many different fields. Through various devices and sensors, users can immerse themselves in a virtual space and interact with virtual objects or with other users. Human-computer interaction in VR systems is itself a topic worth studying. Most current systems interact through head-mounted displays and handheld controllers; while these give users an excellent immersive experience, some users find wearing such devices cumbersome, or even uncomfortable.

To improve this mode of human-computer interaction, this thesis proposes a method, taking the VR game 牽罟 (a traditional cooperative beach-seine fishing activity) in the Hakka Culture Museum as an example, and develops it with the OpenPose deep learning model for human pose estimation, the ZED 2 depth camera, and the Unity engine, so that users can interact with virtual objects and play the game without wearing any device. The ZED 2 depth camera captures images of multiple users and obtains depth information; the images are fed to the OpenPose model to obtain the 2D keypoint coordinates of every user; a Unity script then receives these 2D keypoints and, combined with the depth information, converts them into 3D keypoint coordinates that are projected into Unity's virtual space. The method suits VR systems with multiple users that require real-time interaction: users interact with virtual objects in the virtual space through body movements alone.
Abstract (English) In recent years, virtual reality (VR) technology has matured and is widely used in various fields. Through different devices and sensors, users can be immersed in a virtual space and interact with virtual objects or with other users. Human-computer interaction in VR systems is itself a topic worthy of study. Most current systems interact through head-mounted displays and handheld controllers; although these provide a very good immersive experience, some users find wearing such devices troublesome, or even uncomfortable.

To improve this mode of human-computer interaction, this thesis proposes a method, taking the virtual reality game in the Hakka Culture Museum as an example. The method combines the OpenPose human pose estimation deep learning model, the ZED 2 stereo camera, and the Unity engine, allowing users to interact with virtual objects and play the game without wearing any device. The ZED 2 stereo camera captures images of multiple users and provides depth information; the images serve as input to the OpenPose model, which outputs the 2D keypoint coordinates of every user; a Unity script then receives these 2D keypoints and, using the depth information, converts them into 3D keypoint coordinates that are projected into the Unity virtual space. This method can be applied to virtual reality systems with multiple users that require real-time interaction; users need only their body movements to interact with virtual objects in the virtual space.
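As a rough illustration of the conversion step described in the abstract, the sketch below lifts a 2D keypoint to 3D with the standard pinhole back-projection x = (u - cx)·z/fx, y = (v - cy)·z/fy. It is a minimal sketch under assumed names, not the thesis's published code: fx, fy, cx, and cy stand for the camera intrinsics the ZED SDK reports, the depth map is assumed to be metric and registered to the image, and the confidence threshold is an illustrative choice.

```python
# Minimal sketch (assumptions noted above): back-project OpenPose 2D keypoints
# into 3D camera space using a registered depth map and the pinhole model.
import numpy as np

def lift_keypoint(u, v, depth_map, fx, fy, cx, cy):
    """Back-project one pixel keypoint (u, v) to a 3D point in camera space."""
    z = depth_map[int(round(v)), int(round(u))]  # metric depth at the keypoint
    if not np.isfinite(z) or z <= 0:             # occluded or invalid reading
        return None
    x = (u - cx) * z / fx                        # pinhole back-projection
    y = (v - cy) * z / fy
    return np.array([x, y, z], dtype=np.float32)

def lift_poses(poses_2d, depth_map, fx, fy, cx, cy, min_conf=0.1):
    """poses_2d: (num_people, num_joints, 3) array of (u, v, confidence),
    the per-person keypoint layout OpenPose produces. Returns per-person
    lists of 3D joints, with None for joints that cannot be lifted."""
    poses_3d = []
    for person in poses_2d:
        joints = [lift_keypoint(u, v, depth_map, fx, fy, cx, cy)
                  if conf >= min_conf else None   # skip low-confidence joints
                  for (u, v, conf) in person]
        poses_3d.append(joints)
    return poses_3d
```

In the running system these camera-space points would still have to be transformed by the camera's pose into Unity world coordinates and mapped onto the players' avatars; since Unity scripting is done in C#, this Python version illustrates only the geometry, not the engine integration.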
Keywords ★ Deep Learning
★ 3D Pose Estimation
★ Virtual Reality
★ Human-Computer Interaction
Table of Contents
Abstract (Chinese) i
Abstract (English) ii
Contents iii
List of Figures v
List of Tables vii
1. Introduction 1
1.1 Background 1
1.2 Motivation 2
1.3 Thesis Organization 4
2. Related Work 5
2.1 Interaction in Virtual Reality 5
2.1.1 Number of Users 7
2.1.2 Interactive Devices 9
2.2 Deep Learning 14
2.2.1 Convolutional Neural Network 16
2.2.2 VGG-19 20
2.3 Pose Estimation 21
2.3.1 2D Pose Estimation 21
2.3.2 3D Pose Estimation 23
3. Proposed Framework 26
3.1 Pose Estimation Model 26
3.2 Integration with Virtual Reality 30
3.3 The Design of The Game 33
4. Experiment 39
4.1 Environment Setup 39
4.1.1 Camera 39
4.1.2 Hardware 41
4.1.3 Software 43
4.2 System Evaluation 45
4.2.1 3D Keypoints in Virtual Space 45
4.2.2 Game Presentation 50
5. Conclusion and Future Works 55
6. Reference 56
References
[1] "Unity," [Online]. Available: https://unity.com. [Accessed 6 May 2021].
[2] "Unreal Engine," [Online]. Available: https://www.unrealengine.com/en-US. [Accessed 6 May 2021].
[3] "Taoyuan Tourist Guide - Yongan Fishing Port," [Online]. Available: https://travel.tycg.gov.tw/zh-tw/travel/attraction/314. [Accessed 6 May 2021].
[4] M. Němec, R. Fasuga, J. Trubač and J. Kratochvíl, "Using Virtual Reality in Education," in 2017 15th International Conference on Emerging eLearning Technologies and Applications (ICETA), Stary Smokovec, Slovakia, 2017.
[5] R. Kodama, M. Koge, S. Taguchi and H. Kajimoto, "COMS-VR: Mobile virtual reality entertainment system using electric car and head-mounted display," in 2017 IEEE Symposium on 3D User Interfaces (3DUI), Los Angeles, CA, USA, 2017.
[6] D. Herumurti, A. A. Yunanto, G. A. Senna, I. Kuswardayan and S. Arifiani, "Development of First-Person Shooter Game with Survival Maze Based on Virtual Reality," in 2020 6th Information Technology International Seminar (ITIS), Surabaya, Indonesia, 2020.
[7] J. Schild, S. Misztal, B. Roth, L. Flock, T. Luiz, D. Lerner, M. Herkersdorf, K. Wegner, M. Neuberger, A. Franke, C. Kemp, J. Pranghofer, S. Seele, H. Buhler and R. Herpers, "Applying Multi-User Virtual Reality to Collaborative Medical Training," in 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Tuebingen/Reutlingen, Germany, 2018.
[8] X. Liao, J. Niu, H. Wang and B. Du, "Research on Virtual Reality Simulation Training System of Substation," in 2017 International Conference on Virtual Reality and Visualization (ICVRV), Zhengzhou, China, 2017.
[9] H. Liu, Z. Bi, J. Dai, Y. Yu and Y. Shi, "UAV Simulation Flight Training System," in 2018 International Conference on Virtual Reality and Visualization (ICVRV), Qingdao, China, 2018.
[10] J. Schild, L. Flock, P. Martens, B. Roth, N. Schünemann, E. Heller and S. Misztal, "EPICSAVE Lifesaving Decisions – a Collaborative VR Training Game Sketch for Paramedics," in 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Osaka, Japan, 2019.
[11] "Kinect for Windows - Microsoft," [Online]. Available: https://developer.microsoft.com/zh-tw/windows/kinect/. [Accessed June 2021].
[12] "Kinect v2 basic introduction," [Online]. Available: http://pococater.blogspot.com/2014/10/ofkinect-v2-kinect-v2.html. [Accessed June 2021].
[13] T. C. Hoang, H. T. Dang and V. D. Nguyen, "Kinect-based virtual training system for rehabilitation," in 2017 International Conference on System Science and Engineering (ICSSE), Ho Chi Minh City, Vietnam, 2017.
[14] "Leap Motion Controller," [Online]. Available: https://www.ultraleap.com/product/leap-motion-controller/. [Accessed June 2021].
[15] A. Dzikri and D. E. Kurniawan, "Hand Gesture Recognition for Game 3D Object Using The Leap Motion Controller with Backpropagation Method," in 2018 International Conference on Applied Engineering (ICAE), Batam, Indonesia, 2018.
[16] "MYO gesture armband," [Online]. Available: https://web.archive.org/web/20130303113118/https://getmyo.com/. [Accessed June 2021].
[17] N. P. Brillantes, H. Kim, R. Feria, M. R. Solamo and L. L. Figueroa, "Evaluation of a 3D physics classroom with Myo gesture control armband and unity," in 2017 8th International Conference on Information, Intelligence, Systems & Applications (IISA), Larnaca, Cyprus, 2017.
[18] "HTC VIVE," [Online]. Available: https://www.vive.com/tw/. [Accessed June 2021].
[19] "Oculus Rift," [Online]. Available: https://www.oculus.com/. [Accessed June 2021].
[20] V. T. Nguyen, K. Jung and T. Dang, "VRescuer: A Virtual Reality Application for Disaster Response Training," in 2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), San Diego, CA, USA, 2019.
[21] S. Shen, "Getting started with deep learning," [Online]. Available: https://syshen.medium.com/%E5%85%A5%E9%96%80%E6%B7%B1%E5%BA%A6%E5%AD%B8%E7%BF%92-2-d694cad7d1e5. [Accessed June 2021].
[22] "Deep learning: CNN," [Online]. Available: https://cinnamonaitaiwan.medium.com/%E6%B7%B1%E5%BA%A6%E5%AD%B8%E7%BF%92-cnn%E5%8E%9F%E7%90%86-keras%E5%AF%A6%E7%8F%BE432fd9ea4935. [Accessed June 2021].
[23] K. Simonyan and A. Zisserman, "Very Deep Convolutional Networks for Large-Scale Image Recognition," in International Conference on Learning Representations (ICLR 2015), San Diego, CA, 2015.
[24] "CNN classic model application," [Online]. Available: https://ithelp.ithome.com.tw/articles/10192162. [Accessed June 2021].
[25] R. Go and Y. Aoki, "Flexible top-view human pose estimation for detection system via CNN," in 2016 IEEE 5th Global Conference on Consumer Electronics, Kyoto, Japan, 2016.
[26] Z. Cao, T. Simon, S.-E. Wei and Y. Sheikh, "Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields," in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 2017.
[27] C.-H. Chen and D. Ramanan, "3D Human Pose Estimation = 2D Pose Estimation + Matching," in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 2017.
[28] D. Tome, C. Russell and L. Agapito, "Lifting from the Deep: Convolutional 3D Pose Estimation from a Single Image," in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 2017.
[29] "OpenPose - GitHub," [Online]. Available: https://github.com/CMU-PerceptualComputing-Lab/openpose. [Accessed March 2021].
[30] Z. Cao, G. Hidalgo, T. Simon, S.-E. Wei and Y. Sheikh, "OpenPose: Realtime Multi-Person 2D Pose Estimation Using Part Affinity Fields," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, no. 1, pp. 172-186, Jan 2021.
[31] "Obi Rope," [Online]. Available: https://assetstore.unity.com/packages/tools/physics/obi-rope-55579. [Accessed March 2021].
[32] "First Person Hand - VR Compatible," [Online]. Available: https://assetstore.unity.com/packages/3d/characters/humanoids/humans/first-personhand-vr-compatible-134028. [Accessed March 2021].
[33] "PBR Plastic materials," [Online]. Available: https://assetstore.unity.com/packages/2d/textures-materials/pbr-plastic-materials-107717. [Accessed May 2021].
[34] "Meadow Environment - Dynamic Nature," [Online]. Available: https://assetstore.unity.com/packages/3d/vegetation/meadow-environment-dynamicnature-132195. [Accessed November 2020].
[35] "Forest Environment - Dynamic Nature," [Online]. Available: https://assetstore.unity.com/packages/3d/vegetation/forest-environment-dynamicnature-150668. [Accessed November 2020].
[36] "R.A.M 2019 - River Auto Material 2019," [Online]. Available: https://assetstore.unity.com/packages/tools/terrain/r-a-m-2019-river-auto-material2019-145937. [Accessed November 2020].
[37] "Freshwater fish complete pack," [Online]. Available: https://assetstore.unity.com/packages/3d/characters/animals/fish/freshwater-fishcomplete-pack-77800. [Accessed November 2020].
[38] "Stereolabs - Github," [Online]. Available: https://github.com/stereolabs. [Accessed November 2020].
[39] "ZED with OpenPose - GitHub," [Online]. Available: https://github.com/stereolabs/zed-openpose. [Accessed November 2020].
[40] "ZED 2 stereo camera," [Online]. Available: https://www.stereolabs.com/zed-2/. [Accessed June 2021]
Advisor: Timothy K. Shih (施國琛)    Date of Approval: 2021-08-13
