NCU Institutional Repository (中大機構典藏) — theses and dissertations, past exams, journal articles, and research projects: Item 987654321/9387


    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/9387


    Title: 3D Facial Motion Cloning (立體臉部動作複製)
    Authors: Kai-wen Cheng (鄭凱文)
    Contributors: Graduate Institute of Computer Science and Information Engineering
    Keywords: facial motion animation
    Date: 2007-06-29
    Issue Date: 2009-09-22 11:46:42 (UTC+8)
    Publisher: National Central University Library (國立中央大學圖書館)
    Abstract: In virtual-reality applications, virtual actors (avatars) are common, and the key component of an avatar is its virtual face. A face's principal expressive function is facial motion; however, creating realistic expressions on virtual faces is tedious and time-consuming. We therefore aim to "reuse" existing facial motions, saving both time and cost.

    In this study, we propose a facial motion cloning approach that transfers pre-existing facial motions from one face to another. Although face models differ in their characteristics, shapes, mesh structures, and the scales of their facial features, expressions can still be accurately duplicated after precise computation of the motion scales; this is a technique for motion reuse. The face that provides the original motions is called the "source face," and the face that receives the copied motions is called the "target face." Facial motions are represented by sets of "morph targets," each recording the displacement vectors of all face vertices between the neutral face and a particular expression.

    The proposed system has two major steps. The first step establishes correspondences between the two face models according to their facial features: using manually defined facial feature points, we project the face models onto a 2D plane, triangulate the feature points, and obtain the vertex correspondence between the two models by computing barycentric coordinates. The second step clones the motions: every motion on the source face is copied to the correct position on the target face, and the ratio between the two faces' facial features is computed to obtain the correct motion scale on the target face. Because the facial animation is expected to run in real time, we also favor fast algorithms throughout the system.
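    The two steps above can be sketched in code. The following is a minimal, hypothetical illustration, not the thesis's actual implementation: it computes the 2D barycentric coordinates of a target vertex inside one source feature triangle, then blends the triangle corners' morph-target displacement vectors and rescales them by a feature-size ratio. All function names and the sample values are assumptions made for illustration.

    ```python
    def barycentric(p, a, b, c):
        """2D barycentric coordinates of point p with respect to triangle (a, b, c)."""
        (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
        denom = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
        u = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / denom  # weight of corner a
        v = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / denom  # weight of corner b
        return (u, v, 1.0 - u - v)                                   # weights sum to 1

    def clone_displacement(weights, corner_disps, scale):
        """Blend the corners' 3D displacement vectors with the barycentric
        weights, then rescale by the target/source feature-size ratio."""
        return tuple(
            scale * sum(w * d[i] for w, d in zip(weights, corner_disps))
            for i in range(3)
        )

    # A target vertex projected at (0.25, 0.25) inside the source feature
    # triangle (0,0)-(1,0)-(0,1) picks up a blend of the corner displacements.
    w = barycentric((0.25, 0.25), (0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
    d = clone_displacement(w, [(0, 0, 1.0), (0, 0, 2.0), (0, 0, 3.0)], scale=2.0)
    ```

    In a full system this correspondence would be computed once per target vertex as a precomputation, so that playing back a morph target reduces to applying the already-scaled displacements, which is what makes real-time animation feasible.
    
    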
    Appears in Collections:[Graduate Institute of Computer Science and Information Engineering] Electronic Thesis & Dissertation


    All items in NCUIR are protected by copyright, with all rights reserved.

