Thesis 109522009: Detailed Record




Name: Sheng-Tse Hsiao (蕭盛澤)    Department: Computer Science and Information Engineering
Thesis Title: Multiplayer Virtual Mini Golf in Metaverse
(元宇宙中的多人虛擬迷你高爾夫)
Related Theses
★ A Grouping Mechanism Based on Social Relationships in edX Online Discussion Forums
★ A 3D Visualized Facebook Interaction System Built with Kinect
★ A Kinect-Based Assessment System for Smart Classrooms
★ An Intelligent Urban Route-Planning Mechanism for Mobile Device Applications
★ Dynamic Texture Transfer Based on Analysis of Key Momentum Correlations
★ A Seam-Carving System That Preserves Straight-Line Structures in Images
★ A Community Recommendation Mechanism Based on an Open Online Community Learning Environment
★ System Design of an Interactive Situated Learning Environment for English as a Foreign Language
★ An Emotional Color Transfer Mechanism Based on Skin-Color Preservation
★ A Gesture Recognition Framework for Virtual Keyboards
★ Error Analysis of Fractional-Power Grey Generating Prediction Models and Development of a Computer Toolbox
★ Real-Time Human Skeleton Motion Construction Using Inertial Sensors
★ Real-Time 3D Modeling Based on Multiple Cameras
★ A Grouping Mechanism for Genetic Algorithms Based on Complementarity and Social Network Analysis
★ A Virtual Instrument Performance System with Real-Time Hand Tracking
★ A Real-Time Virtual Instrument Performance System Based on Neural Networks
  1. This electronic thesis is approved for immediate open access.
  2. The open-access full text is licensed only for personal, non-profit retrieval, reading, and printing for academic research purposes.
  3. Please comply with the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast the work without authorization.

Abstract (Chinese) Recently, with the emergence of COVID-19, quarantined people have been isolated in separate indoor spaces, and the world is going through a difficult time. To give people confined indoors more interaction with others, we propose a system, developed with deep learning (AI) and image-processing techniques, that lets people in different indoor spaces communicate, interact, and play games naturally. The problems addressed include the management of skeleton data, interaction with virtual objects, interactive control in virtual reality (VR) and augmented reality (AR), and the construction of a 3D virtual space. We also focus on the interaction between people and their environment, constructing a dedicated 3D model of a mini golf course in which multiple users can spawn and interact with one another. On the 3D course, under the limited viewing angle of a single RGB camera, each reconstructed avatar mirrors the user's upper-body posture, with MediaPipe skeleton recognition at the core of this interaction.
In addition, our interaction model considers the relationships among the hand skeleton, objects, and the environment, such as the avatar's interaction with the golf club or the club's interaction with the ball. For these high-precision interactions, we combine the gesture-detection and human-skeleton-detection modules, track the joints of the hand and spine skeletons, and apply algorithms such as inverse kinematics (IK) to project them into the virtual environment. Players therefore need no controller: simple gestures or intuitive movements suffice to control a character in the metaverse. With the Mirror networking module, we build a complete mini golf world that offers richer player-to-player and player-to-object interactions, expanding the social dimension of the metaverse.
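The skeleton-capture step described above can be illustrated with a minimal Python sketch (not code from the thesis) using the public MediaPipe Pose API on a single RGB webcam; the camera index, confidence thresholds, and printed joint subset are illustrative assumptions.

    # Minimal sketch of single-camera upper-body skeleton capture with MediaPipe.
    # Assumptions: the "mediapipe" and "opencv-python" packages, webcam index 0.
    import cv2
    import mediapipe as mp

    mp_pose = mp.solutions.pose

    cap = cv2.VideoCapture(0)  # single RGB camera, as in the thesis setting
    with mp_pose.Pose(min_detection_confidence=0.5,
                      min_tracking_confidence=0.5) as pose:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV captures BGR.
            results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.pose_landmarks:
                # Upper-body joints: shoulders, elbows, wrists (indices 11-16).
                upper = [results.pose_landmarks.landmark[i] for i in range(11, 17)]
                print([(round(lm.x, 3), round(lm.y, 3), round(lm.z, 3)) for lm in upper])
    cap.release()

In the thesis pipeline these landmarks would then be streamed into Unity to drive the avatar; that bridge is outside this sketch.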
Abstract (English) Recently, due to COVID-19, quarantined people have been isolated in separate indoor spaces, and the world is going through a difficult time. To give people confined indoors more interaction with others, we propose a system, built with deep learning and image-processing techniques, that allows people in different indoor spaces to communicate and play games naturally. The problems addressed include the management of skeleton data, interaction with virtual objects, interactive control in virtual reality (VR) and augmented reality (AR), and the construction of a 3D virtual space. We also focus on the interaction between people and their environment, constructing a dedicated 3D mini golf course in which multiple users can spawn and interact with one another. On the 3D course, under the limited viewing angle of a single RGB camera, each reconstructed avatar mirrors the user's upper-body posture, and the avatars can interact in the AR/VR space, with MediaPipe skeleton recognition at the core of these interactions.
In addition, our interaction model considers the relationships among the hand skeleton, objects, and the environment, such as the avatar's interaction with the golf club or the club's interaction with the ball. For these high-precision interactions, we combine gesture detection and human skeleton detection, track the joints of the hand and spine skeletons, and apply algorithms such as inverse kinematics (IK) to project them into the virtual space. Players therefore need no controller: simple gestures or intuitive movements suffice to control a character in the metaverse. With the Mirror networking module, we build a complete mini golf world that offers richer player-to-player and player-to-object interactions, expanding the social dimension of the metaverse.
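The IK step the abstract mentions can be sketched with a generic two-bone analytic solver (law of cosines), the standard way to place an elbow once a tracked wrist position is known. This is a hedged illustration, not the thesis implementation; the segment lengths and target in the example are hypothetical.

    # Minimal two-bone inverse-kinematics sketch (planar, law of cosines).
    # Given upper-arm and forearm lengths l1, l2 and a wrist target (tx, ty),
    # return shoulder and elbow angles; forearm direction = shoulder + elbow.
    import math

    def two_bone_ik(l1, l2, tx, ty):
        d = math.hypot(tx, ty)
        # Clamp targets that are out of reach (too far or too close).
        d = max(min(d, l1 + l2 - 1e-6), abs(l1 - l2) + 1e-6)
        # Interior elbow angle from the law of cosines.
        cos_elbow = (l1 * l1 + l2 * l2 - d * d) / (2 * l1 * l2)
        elbow = math.pi - math.acos(max(-1.0, min(1.0, cos_elbow)))
        # Shoulder angle: direction to target minus the inner triangle angle.
        cos_inner = (l1 * l1 + d * d - l2 * l2) / (2 * l1 * d)
        shoulder = math.atan2(ty, tx) - math.acos(max(-1.0, min(1.0, cos_inner)))
        return shoulder, elbow

    # Hypothetical example: 0.30 m upper arm, 0.25 m forearm, wrist at (0.4, 0.2).
    print(two_bone_ik(0.30, 0.25, 0.4, 0.2))

A full-body system would solve this per limb in 3D (as Unity IK components do), but the triangle geometry above is the core idea.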
Keywords (Chinese) ★ Metaverse (元宇宙)
★ Mini Golf (迷你高爾夫)
★ Multiplayer Networking (多人連線)
★ Skeleton Detection (骨架偵測)
Keywords (English) ★ Metaverse
★ Mini Golf
★ Multiplayer
★ Skeleton Detection
★ MediaPipe
Table of Contents 1 Introduction.............................................1
1.1 Deep learning......................................1
1.2 Metaverse..........................................1
1.3 MediaPipe..........................................3
1.4 Inverse kinematics.................................3
1.5 Virtual Mini Golf..................................3
2 Related Work.............................................5
3 Primary Research........................................13
3.1 Unity.............................................13
3.2 Space.............................................13
3.3 Club..............................................14
3.4 Avatar............................................15
3.5 Mirror............................................15
4 Methodology.............................................17
4.1 Preface...........................................17
4.2 Data Input and Preprocessing......................17
4.2.1 Depth-fix and vectorization...............18
4.2.2 Spine rotation............................19
4.2.3 Walking detect............................21
4.3 Avatar Control and Game Mechanics.................21
4.3.1 Avatar Control............................21
4.3.2 Mechanism.................................24
4.4 Multiplayer Features..............................28
4.4.1 Player and respawn point design...........28
4.4.2 Code management and permission detection..29
4.4.3 Game variants with more than three players..34
5 Experiments.............................................39
5.1 Depth-fix.........................................39
5.2 Sign stabilization................................43
5.3 Spine Z-axis rotation.............................45
6 Conclusion..............................................48
7 References..............................................50
Advisor: Timothy K. Shih (施國琛)    Date of Approval: 2022-07-28
