Thesis 945202021: Complete Metadata Record

DC Field: Value [Language]
dc.contributor: Department of Computer Science and Information Engineering (資訊工程學系) [zh_TW]
dc.creator: 鄭凱文 [zh_TW]
dc.creator: Kai-wen Cheng [en_US]
dc.date.accessioned: 2007-07-18T07:39:07Z
dc.date.available: 2007-07-18T07:39:07Z
dc.date.issued: 2007
dc.identifier.uri: http://ir.lib.ncu.edu.tw:88/thesis/view_etd.asp?URN=945202021
dc.contributor.department: Department of Computer Science and Information Engineering (資訊工程學系) [zh_TW]
dc.description: National Central University (國立中央大學) [zh_TW]
dc.description: National Central University [en_US]
dc.description.abstract: In virtual reality applications, virtual characters (avatars) are a common feature, and a realistic virtual character requires a virtual face, whose main expressive power lies in rich facial expressions. Adding realistic expression motions to a virtual face, however, is tedious and time-consuming. For this reason, we want to "reuse" motions that have already been created, saving both time and money. We propose a facial-motion-cloning method that copies an existing set of facial motions from one face onto another. Each face differs in its features, the sizes and shapes of its facial parts, and its mesh structure, yet the expressions can still be copied and reproduced correctly through careful computation; this constitutes a motion-reuse technique. The face whose motions are copied is called the source face, and the face the motions are copied onto is called the target face. Facial motions are represented as "morph targets," which record, for every face vertex, the displacement vector between a given expression and the corresponding point on the neutral (expressionless) face. Our method consists of two main steps. The first establishes a correspondence between the two face models according to the positions of their facial features: using manually defined facial feature points, we project the face models onto a 2D plane, triangulate the feature points, and obtain the vertex correspondence between the two models by computing barycentric coordinates. The second step clones the motions: every motion on the source face is copied to the correct position on the target face, and the proportions of the facial features are computed so that the target face receives motions of the correct scale. In addition, we want the system to run in real time, so we favor fast methods wherever possible. [zh_TW]
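The correspondence step described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the thesis code: after both faces are projected onto a 2D plane and the feature points triangulated, each target-face vertex is located inside a source triangle and expressed in barycentric coordinates. All function names here are hypothetical.

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates (u, v, w) of 2D point p in triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    # Signed-area determinant of the triangle (twice its area).
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    u = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    v = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
    return u, v, 1.0 - u - v

def inside(coords, eps=1e-9):
    """A point lies inside (or on) the triangle iff all weights are non-negative."""
    return all(c >= -eps for c in coords)
```

Once a target vertex's enclosing source triangle is found, the triple (u, v, w) fixes its position relative to the source mesh and can later interpolate any per-vertex quantity defined on that triangle.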
dc.description.abstract: In virtual reality applications, virtual actors (avatars) are commonly used, and their key component is the virtual face. The principal function of a virtual face is facial expression; however, authoring expressions on virtual faces is tedious and time-consuming. We therefore aim to develop an automatic system that "reuses" existing facial expressions. In this study, we propose a facial-motion-cloning approach that transfers pre-existing facial motions from one face to another. The face models differ in their characteristics, shapes, and the scales of their facial features, but expressions can still be duplicated accurately once the scales of the facial motions are computed precisely. The face that provides the original motions is called the "source face," and the face onto which the copied motions are applied is called the "target face." Facial motions are represented by sets of "morph targets," which record the displacement vectors of all face vertices between the neutral state and a particular motion. The proposed system has two major steps. The first establishes a correspondence between the two face models according to their facial features: using manually defined facial feature points, we project the face models onto a 2D plane, re-triangulate the models according to the feature points, and obtain the correspondence between the vertices of the two models by computing barycentric coordinates. The second step clones the facial motions: we duplicate the motions from the source face to the target face and compute the scale of the facial features between the two faces to obtain the correct motion scale. The facial animation is expected to run in real time, so we also favor fast algorithms in developing the cloning system. [en_US]
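The cloning step above can be sketched as a short routine. This is a hypothetical illustration under stated assumptions, not the thesis implementation: a target vertex tied to source triangle (i, j, k) with barycentric weights (u, v, w) receives the interpolated morph-target displacement, rescaled by per-axis target-to-source feature-size ratios. The data layout (`src_disp` as a sparse dict of displacement vectors) is assumed for illustration.

```python
def clone_displacement(src_disp, tri, weights, scale):
    """Interpolate source morph-target displacements over a triangle and rescale.

    src_disp : dict vertex_index -> (dx, dy, dz) for one morph target
               (vertices absent from the dict are treated as unmoved)
    tri      : (i, j, k) source-triangle vertex indices enclosing the target vertex
    weights  : (u, v, w) barycentric weights of the target vertex in that triangle
    scale    : (sx, sy, sz) target/source facial-feature size ratios
    """
    vecs = [src_disp.get(i, (0.0, 0.0, 0.0)) for i in tri]
    # Barycentric interpolation of the three corner displacements, per axis.
    interp = [sum(w * v[axis] for w, v in zip(weights, vecs)) for axis in range(3)]
    # Rescale so motion magnitudes match the target face's feature proportions.
    return tuple(s * d for s, d in zip(scale, interp))
```

Applying this to every target vertex, for every morph target, yields the cloned motion set; since the weights and triangle assignments are computed once in the correspondence step, the per-frame work is a cheap weighted sum, consistent with the real-time goal stated in the abstract.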
dc.subject: facial animation (臉部動畫) [zh_TW]
dc.subject: facial motion animation [en_US]
dc.title: 3D Facial Motion Cloning (立體臉部動作複製) [zh_TW]
dc.language.iso: zh-TW [zh-TW]
dc.title: 3D Facial Motion Cloning [en_US]
dc.type: thesis (博碩士論文) [zh_TW]
dc.type: thesis [en_US]
dc.publisher: National Central University [en_US]
