    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/93052


    Title: 基於3D全身人體追蹤及虛擬試衣之手語展示系統 (Sign Language Display System Based on 3D Body Tracking and Virtual Try-on)
    Authors: 李元熙 (LI-YUAN-SI)
    Contributors: Department of Computer Science and Information Engineering
    Keywords: virtual try-on; human body modeling; sign language
    Date: 2023-07-12
    Upload time: 2024-09-19 16:39:55 (UTC+8)
    Publisher: National Central University
    Abstract: Sign language is a form of visual communication that relies on a combination of hand gestures, facial expressions, and body language to convey meaning. Millions of individuals worldwide who are deaf or hard of hearing, as well as those who communicate with them, use it on a daily basis. However, despite its importance, sign language recognition and translation remain challenging tasks because of the complexity and variability of sign language.

    In recent years, computer vision techniques have been increasingly applied to sign language recognition and translation, with promising results. In this work, we introduce a sign language display system based on three-dimensional body modeling[1] and virtual try-on[2]. Our approach uses body mesh estimation to generate a 3D human model of the signer, which is then fed into a multi-garment network[2] to simulate the appearance of clothing on the signer.

    We collected a dataset of 100 sign language videos, each featuring a different signer performing a range of signs. To use these videos, we first apply YOLOv5[17] to crop out the signer, creating a better setting for human mesh estimation. We then run a body mesh estimation algorithm designed to improve the accuracy of wrist rotation to extract the signer's body model from each video, and apply a virtual try-on method to simulate different types of clothing on the signer. The result is a virtual human model whose pose and shape match those of the original signer and whose clothing is selected from a garment dataset. We combine these per-frame models to generate a video in which a virtual human model wearing virtual clothing performs the sign language.
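    The per-frame pipeline described above (YOLOv5 person cropping, body mesh estimation, garment simulation, and frame-by-frame composition) can be sketched roughly as follows. This is a minimal illustration only: loading YOLOv5 via torch.hub is standard, but estimate_body_mesh() and apply_virtual_tryon() are hypothetical placeholders standing in for the thesis's wrist-rotation-aware mesh estimator and multi-garment try-on network, whose actual interfaces are not given here, and the fixed 512x512 output resolution is an arbitrary choice for the sketch.

```python
import cv2
import torch

def estimate_body_mesh(person_crop):
    # Placeholder for the thesis's body mesh estimation step; here it simply
    # passes the crop through so the sketch runs end to end.
    return person_crop

def apply_virtual_tryon(body_mesh, garment):
    # Placeholder for the multi-garment try-on step; here it returns its
    # input unchanged instead of rendering the dressed mesh.
    return body_mesh

def render_sign_video(video_path, garment, out_path="output.mp4"):
    # Step 1 uses the public YOLOv5 hub model; COCO class 0 is "person".
    detector = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
    cap = cv2.VideoCapture(video_path)
    writer = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        det = detector(rgb).xyxy[0]          # detections: x1, y1, x2, y2, conf, class
        persons = det[det[:, 5] == 0]
        if len(persons) == 0:
            continue
        x1, y1, x2, y2 = persons[0, :4].int().tolist()
        crop = frame[y1:y2, x1:x2]           # Step 1: crop the signer
        mesh = estimate_body_mesh(crop)      # Step 2: estimate the 3D body mesh
        dressed = apply_virtual_tryon(mesh, garment)  # Step 3: simulate clothing
        dressed = cv2.resize(dressed, (512, 512))     # fixed output size for the sketch
        if writer is None:
            writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"),
                                     30, (512, 512))
        writer.write(dressed)                # Step 4: compose frames into a video
    cap.release()
    if writer is not None:
        writer.release()
```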
    Appears in Collections: [Department of Computer Science and Information Engineering] Master's and Doctoral Theses

    Files in this item: index.html (0 KB, HTML, 13 views)


    All items in NCUIR are protected by copyright, with all rights reserved.
