In recent years, virtual reality technology has matured and been applied in many different fields. By wearing certain devices, users can enter a virtual environment and interact with virtual objects and other users. The metaverse has also become a popular topic because of COVID-19, since it allows people in quarantine to communicate and interact with others. However, most current virtual reality devices are handheld, wearable, or intrusive; although they offer good performance, they may make some users uncomfortable.

To improve this situation, in this thesis I propose a system that combines MMPose, the Unity game engine, and inverse kinematics, allowing users to control their avatars to interact with virtual objects and other users and to dance in a virtual environment, without holding, wearing, or attaching any device to their bodies. MMPose is a human pose estimation model that extracts 3D keypoints of multiple people from a single RGB input image; these keypoint data are then received by Unity, and finally, through inverse kinematics, users can control their avatars to interact with virtual objects or make the avatars dance in the virtual environment. With this method, users no longer need to wear any additional equipment: they simply perform the corresponding poses in front of a camera, and their avatars reproduce their movements.