Master's/Doctoral Thesis 110522144: Complete Metadata Record

DC Field: Value [Language]
dc.contributor: 資訊工程學系 (Department of Computer Science and Information Engineering) [zh_TW]
dc.creator: 李元熙 [zh_TW]
dc.creator: LI-YUAN-SI [en_US]
dc.date.accessioned: 2023-07-12T07:39:07Z
dc.date.available: 2023-07-12T07:39:07Z
dc.date.issued: 2023
dc.identifier.uri: http://ir.lib.ncu.edu.tw:444/thesis/view_etd.asp?URN=110522144
dc.contributor.department: 資訊工程學系 (Department of Computer Science and Information Engineering) [zh_TW]
dc.description: 國立中央大學 [zh_TW]
dc.description: National Central University [en_US]
dc.description.abstract: 手語是一種視覺交流形式,它依靠手勢、面部表情和肢體語言的組合來傳達意義。全世界數以百萬計的失聰或聽障人士以及與他們交流的人每天都在使用它。然而,儘管它很重要,但由於手語的複雜性和可變性,手語識別和翻譯仍然是一項具有挑戰性的任務。近年來,計算機視覺技術越來越多地應用於手語識別和翻譯,並取得了好的成果。在這項工作中,我們介紹了一種基於三維身體建模和虛擬試衣的手語顯示系統。我們的方法涉及使用身體網格估計來生成手語者的 3D 人體模型,然後將其用作多服裝網絡的輸入以模擬手語者衣服的外觀。我們收集了包含 100 個手語影片的資料集,每個影片都有不同的手語者表演一系列手語。為了使用這些影片,我們首先使用 YOLOv5 裁剪出手語者,以創建更好的環境來進行人體網格估計,並使用旨在提高手腕旋轉精度的身體網格估計算法從每個影片中提取手語者的身體模型,然後應用虛擬試穿的方法在手語者身上模擬不同類型的服裝。之後,我們得到了一個姿勢和形狀與原始手語者相同的虛擬人物模型,其衣服是從衣裝資料集中選擇的。我們將這些模型一幀一幀地組合起來,生成一個影片,該影片顯示了一個虛擬人體模型穿著虛擬服裝演示手語。 [zh_TW]
dc.description.abstract: Sign language is a form of visual communication that relies on a combination of hand gestures, facial expressions, and body language to convey meaning. It is used daily by millions of people worldwide who are deaf or hard of hearing, as well as by those who communicate with them. Despite its importance, however, sign language recognition and translation remain challenging tasks due to the complexity and variability of sign language. In recent years, computer vision techniques have been increasingly applied to sign language recognition and translation, with promising results. In this work, we introduce a sign language display system based on three-dimensional body modeling [1] and virtual try-on [2]. Our approach uses body mesh estimation to generate a 3D human model of the signer, which is then used as input to a multi-garment network [2] to simulate the appearance of clothing on the signer. We collected a dataset of 100 sign language videos, each featuring a different signer performing a range of signs. To use these videos, we first apply YOLOv5 [17] to crop out the signer, creating a better input for human mesh estimation. We then use a body mesh estimation algorithm designed to improve the accuracy of wrist rotation to extract the signer's body model from each video, and apply a virtual try-on method to simulate different types of clothing on the signer. The result is a virtual human model whose pose and shape match the original signer, with clothes selected from a garment dataset. We combine these models frame by frame to generate a video showing a virtual human model in virtual clothing performing sign language. [en_US]
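The abstract describes a frame-by-frame pipeline: detect and crop the signer with YOLOv5, estimate a 3D body mesh, dress the mesh with a multi-garment network, and reassemble the rendered frames into a video. The sketch below is a minimal illustration of only the cropping and frame-assembly steps, using the public Ultralytics YOLOv5 model from torch.hub; the file names are placeholders, and the mesh-estimation and try-on stages are indicated only as comments because the thesis's specific models are not reproduced here.

```python
# Minimal sketch of the per-frame pipeline described in the abstract.
# Only YOLOv5 cropping and video re-assembly are shown; the body mesh estimation
# and multi-garment try-on stages are marked by comments, since the specific
# models used in the thesis are not reproduced here.
import cv2
import torch

# Public Ultralytics YOLOv5 model from torch.hub (downloads weights on first use).
detector = torch.hub.load("ultralytics/yolov5", "yolov5s")

def crop_signer(frame_bgr):
    """Return the crop of the most confident detected person, or the full frame."""
    results = detector(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))  # YOLOv5 expects RGB
    det = results.xyxy[0]                           # columns: x1, y1, x2, y2, conf, class
    persons = det[det[:, 5] == 0]                   # COCO class 0 is "person"
    if len(persons) == 0:
        return frame_bgr
    x1, y1, x2, y2 = persons[0, :4].int().tolist()  # detections are sorted by confidence
    return frame_bgr[y1:y2, x1:x2]

cap = cv2.VideoCapture("sign_video.mp4")            # placeholder input path
frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    person = crop_signer(frame)
    # The thesis pipeline would, at this point, (1) estimate the signer's 3D body
    # mesh from `person` and (2) apply the multi-garment network to dress the mesh,
    # then render the clothed mesh back to an image before writing it out.
    frames.append(cv2.resize(person, (512, 512)))   # uniform size for the output video
cap.release()

writer = cv2.VideoWriter("cropped_signer.mp4",      # placeholder output path
                         cv2.VideoWriter_fourcc(*"mp4v"), 30.0, (512, 512))
for f in frames:
    writer.write(f)
writer.release()
```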
dc.subject: 虛擬試衣 (virtual try-on) [zh_TW]
dc.subject: 人體建模 (human body modeling) [zh_TW]
dc.subject: 手語 (sign language) [zh_TW]
dc.title: 基於3D全身人體追蹤及虛擬試衣之手語展示系統 [zh_TW]
dc.language.iso: zh-TW
dc.title: Sign Language Display System Based on 3D Body Tracking and Virtual Try-on [en_US]
dc.type: 博碩士論文 (master's/doctoral thesis) [zh_TW]
dc.type: thesis [en_US]
dc.publisher: National Central University [en_US]
