NCU Institutional Repository — Item 987654321/89823
RC Version 7.0 © Powered By DSPACE, MIT. Enhanced by NTU Library IR team.


    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/89823


    Title: An Interactive System for Metaverse Virtual Conference Based on Deep Learning Gesture Recognition (基於深度學習手勢辨識之元宇宙虛擬會議互動系統)
    Author: Tee, Chong Sheng (鄭棕升)
    Contributors: Department of Computer Science and Information Engineering
    Keywords: gesture recognition; Metaverse; virtual conference; virtual reality; hand tracking; deep learning
    Date: 2022-07-27
    Uploaded: 2022-10-04 12:01:09 (UTC+8)
    Publisher: National Central University
    Abstract: Deep learning techniques are now widely known, and they have been developed into numerous projects and applied across many fields. Increasingly, virtual reality (VR) technology is combined with deep learning to build practical tools and systems that let users perform specific tasks. Introducing VR aims to reduce the burden on users, provide a good user experience, and overcome usage limitations in certain scenarios. In this paper, we propose an interactive system for Metaverse virtual conferences that uses only a monocular RGB camera as its input device. The system applies deep-learning-based gesture recognition and uses the MediaPipe framework as the core of its hand-tracking architecture. Hand-joint data obtained from the hand-tracking stage are transferred to the virtual environment, where a 3D hand model is mapped to the processed joint data. This mapping relies on the concept of inverse kinematics (IK) and several algorithms to drive the corresponding pose and orientation of the 3D hand model. Because the predicted z-axis values of the joint points are only relative to the hand's own keypoints, additional conditions are needed to relate the 3D hand keypoints to the virtual environment. To solve the problem of the 3D hand model jittering from frame to frame, we apply a smoothing algorithm that improves its stability in the virtual environment. The system provides two main functions: a virtual keyboard and virtual handwriting. Text or drawings entered through these functions can be recorded on sticky notes in the 3D virtual space. Interaction with virtual objects is based on the concept of object collision. Finally, system evaluations show that users can interact with virtual objects without a VR controller: simply moving the hand or bending the fingers makes the 3D hand model perform similar actions. This technique of delivering a VR experience with a monocular RGB camera demonstrates practical usability and stability in the system.
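The abstract mentions a smoothing algorithm that stabilizes the tracked hand landmarks between frames but does not name the specific method. The sketch below shows one plausible minimal approach, a per-landmark exponential moving average over MediaPipe-style (x, y, z) keypoints; the class name, `alpha` value, and landmark count of 21 are illustrative assumptions, not details taken from the thesis.

```python
# Hedged sketch: per-frame landmark smoothing via an exponential moving
# average (EMA). MediaPipe Hands outputs 21 (x, y, z) keypoints per hand;
# which smoothing algorithm the thesis actually uses is not stated.

NUM_LANDMARKS = 21  # MediaPipe Hands predicts 21 keypoints per hand


class LandmarkSmoother:
    def __init__(self, alpha=0.5):
        self.alpha = alpha  # 0 < alpha <= 1; lower = smoother but laggier
        self.state = None   # last smoothed frame, or None before first frame

    def smooth(self, landmarks):
        """landmarks: list of NUM_LANDMARKS (x, y, z) tuples for one frame."""
        if self.state is None:
            # First frame passes through unchanged and seeds the filter.
            self.state = [list(p) for p in landmarks]
        else:
            a = self.alpha
            for i, (x, y, z) in enumerate(landmarks):
                sx, sy, sz = self.state[i]
                # Blend the new observation with the previous smoothed value.
                self.state[i] = [a * x + (1 - a) * sx,
                                 a * y + (1 - a) * sy,
                                 a * z + (1 - a) * sz]
        return [tuple(p) for p in self.state]
```

In a pipeline like the one described, each raw frame of landmarks from the hand tracker would be passed through `smooth()` before driving the IK mapping, trading a small amount of latency for a visibly steadier 3D hand model.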
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Theses & Dissertations

    Files in This Item:

    File        Description  Size  Format  Views
    index.html               0Kb   HTML    56


    All items in NCUIR are protected by copyright, with all rights reserved.

