Master's/Doctoral Thesis 945202021 — Detailed Record




Name: Kai-wen Cheng (鄭凱文)    Department: Computer Science and Information Engineering
Thesis Title: 3D Facial Motion Cloning (立體臉部動作複製)
Related theses
★ Video error concealment for large damaged areas and scene changes
★ Force feedback correction and rendering in a virtual haptic system
★ Multispectral satellite image fusion and infrared image synthesis
★ A laparoscopic cholecystectomy simulation system
★ Dynamically loaded multiresolution terrain modeling in a flight simulation system
★ Wavelet-based multiresolution terrain modeling and texture mapping
★ Multiresolution optical flow analysis and depth computation
★ Volume-preserving deformation modeling for laparoscopic surgery simulation
★ Interactive multiresolution model editing techniques
★ Wavelet-based multiresolution edge tracking for edge detection
★ Multiresolution modeling based on quadric error and attribute criteria
★ Progressive image compression based on integer wavelet transform and grey theory
★ Tactical simulation built on dynamically loaded multiresolution terrain modeling
★ Face detection and feature extraction using spatial relations of multilevel segmentation
★ Wavelet-based image watermarking and compression
★ Appearance-preserving and view-dependent multiresolution modeling
  1. The electronic full text of this thesis is approved for immediate open access.
  2. Electronic full texts that have reached their open-access date are licensed for academic research only; users may retrieve, read, and print them for personal, non-profit purposes.
  3. Please comply with the Copyright Act of the Republic of China: do not reproduce, distribute, adapt, repost, or broadcast the content without authorization.

Abstract (Chinese) In virtual-reality applications, the virtual character (avatar) is a common component, and a convincing virtual character requires a virtual face. The expressive power of a virtual face lies mainly in its rich facial motions; however, adding realistic expressions to a virtual face is tedious and time-consuming. For this reason, we want to "reuse" motions that have already been built, saving both time and money.
We propose a method for cloning facial motions that copies an existing set of facial motions from one face onto another. Every face differs in its features, the sizes and shapes of its facial features, and its mesh structure, yet the expressions can still be copied and reproduced correctly after careful computation; this is a motion-reuse technique.
The face whose motions are copied is called the source face, and the face that receives them is called the target face. Our facial motions are represented by "morph targets," each recording the displacement vectors between all face vertices of a given expression and the corresponding vertices of the neutral face.
Our method consists of two major steps. The first builds a correspondence between the two face models according to the positions of their facial features: using manually defined facial feature points, we project the face models onto a 2D plane, triangulate the feature points, and obtain the vertex correspondence between the two models by computing barycentric coordinates. The second step clones the motions: every motion on the source face is copied to the correct position on the target face, and the proportions of the facial features are computed so that the target face receives motions of the correct scale. In addition, we want the system to run in real time, so we favor fast methods wherever possible.
Abstract (English) In the applications of virtual reality, virtual actors (avatars) are commonly used. The key component of a virtual actor is the virtual face, whose principal function is facial expression; however, rendering expressions on virtual faces is tedious and time-consuming. Thus we aim to develop an automatic system to "reuse" existing facial expressions.
In this study, we propose a facial motion cloning approach that transfers pre-existing facial motions from one face to another. The face models differ in characteristics, shapes, scales of facial features, and so on, but expressions can still be accurately duplicated after precise computation of the scales of the facial motions.
The face that provides the original motions is called the "source face," and the face that receives the copied motions is called the "target face." Facial motions are represented by sets of "morph targets," each recording the displacement vectors of all face vertices between the neutral state and a particular motion.
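The morph-target representation described above can be sketched in a few lines of code (a minimal illustration using a hypothetical three-vertex "face"; the function and variable names are ours, not the thesis's):

```python
import numpy as np

def make_morph_target(neutral, expression):
    """A morph target: per-vertex displacement vectors from the
    neutral face to a particular expression."""
    return np.asarray(expression) - np.asarray(neutral)

def apply_morph_target(neutral, morph_target, weight=1.0):
    """Blend the expression onto the neutral face; weight interpolates
    between neutral (0.0) and the full expression (1.0)."""
    return np.asarray(neutral) + weight * morph_target

# Hypothetical three-vertex "face": neutral positions and a smile pose.
neutral = np.array([[0.0, 0.0, 0.0],
                    [1.0, 0.0, 0.0],
                    [0.5, 1.0, 0.0]])
smile = np.array([[0.0, 0.1, 0.0],
                  [1.0, 0.1, 0.0],
                  [0.5, 1.0, 0.2]])

mt = make_morph_target(neutral, smile)            # displacement vectors
half_smile = apply_morph_target(neutral, mt, weight=0.5)
```

In an animation loop, varying `weight` over time plays the motion back; a set of such morph targets, one per expression, is what the cloning step transfers.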
The proposed system has two major steps. The first step establishes a correspondence between the two face models according to their facial features. In this step, we use manually defined facial feature points: we project the face models onto a 2D plane, re-triangulate the models according to the feature points, and obtain the vertex correspondence between the two face models by computing barycentric coordinates. The second step clones the facial motions: we duplicate the motions from the source face to the target face and compute the scale of the facial features between the two faces to obtain the correct motion scale. The facial animation is expected to run in real time, so we also favor fast algorithms in developing the cloning system.
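The two steps above — barycentric correspondence on the projected 2D plane, then motion transfer with a feature-scale correction — can be sketched as follows. This is a simplified illustration: the per-region scale factor and all names are our assumptions, not the thesis's exact formulation.

```python
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates (u, v, w) of 2D point p in triangle (a, b, c),
    so that p = u*a + v*b + w*c with u + v + w = 1."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return 1.0 - v - w, v, w

def transfer_motion(bary, tri_displacements, scale):
    """Interpolate the source displacements at the target vertex's
    barycentric position inside a feature triangle, then rescale by the
    (hypothetical) ratio of target to source feature size."""
    u, v, w = bary
    d = (u * tri_displacements[0]
         + v * tri_displacements[1]
         + w * tri_displacements[2])
    return scale * d

# A source-face feature triangle on the 2D projection plane, and a
# target-face vertex that falls inside it after both faces are projected.
a, b, c = np.array([0.0, 0.0]), np.array([2.0, 0.0]), np.array([0.0, 2.0])
p = np.array([0.5, 0.5])
u, v, w = barycentric(p, a, b, c)

# Source morph-target displacements at the triangle's three corners.
disp = np.array([[0.0, 0.0, 0.0],
                 [0.0, 0.4, 0.0],
                 [0.0, 0.4, 0.0]])
target_disp = transfer_motion((u, v, w), disp, scale=0.8)
```

Repeating this for every target vertex and every morph target yields the cloned motion set; because the correspondence and scales are computed once, playback remains fast enough for real-time animation.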
Keywords (Chinese) ★ facial animation (臉部動畫)    Keywords (English) ★ facial motion animation
Table of Contents
Abstract
Acknowledgments
Contents
Chapter 1  Introduction
Chapter 2  Related Work
Chapter 3  Face Correspondence
Chapter 4  Computing the Displacement Vectors of Target-Face Vertices
Chapter 5  Experiments
Chapter 6  Conclusions
Appendix   English Version of the Thesis
References [1] Escher, M. and N. Magnenat-Thalmann, "Automatic 3D cloning and real-time animation of a human face," in Proc. of Computer Animation '97 Conf., Geneva, Switzerland, Jun.5-6, 1997, pp.58-66.
[2] Escher, M., I. Pandzic, and N. Magnenat-Thalmann, “Facial deformations for MPEG-4,” in Proc. of Computer Animation ’98 Conf., Philadelphia, Jun.8-10, 1998, pp.138-145.
[3] Fang, S., R. Raghavan, and J. Richtsmeier, “Volume morphing methods for landmark based 3D image deformation,” in Proc. SPIE Int. Symposium on Medical Imaging, Newport Beach, CA, Apr. 1996, pp.404-415.
[4] Fratarcangeli, M., “Physically based synthesis of animatable face models,” in Proc. of the 2nd Workshop in Virtual Reality Interaction and Physical Simulations, Pisa, Italy, Nov. 2005. pp.32-39.
[5] Fratarcangeli, M. and M. Schaerf, “Fast facial motion cloning in MPEG-4,” in Proc. of the 4th Int. Symposium on Image and Signal Processing and Analysis, Zagreb, Croatia, Sep.15-17, 2005, pp.310-315.
[6] Guenter, B., C. Grimm, D. Wood, H. Malvar and F. Pighin, “Making faces,” in Proc. of the 25th Annual Conf. on Computer Graphics and Interactive Techniques, New York, NY, Jul. 1998, pp.55-66.
[7] Kalra, P., A. Mangili, N. Magnenat-Thalmann, and D. Thalmann, "Simulation of facial muscle actions based on rational free form deformations," in Proc. of Eurographics '92 Conf., Cambridge, UK, Sep.7-11, 1992, pp.65-69.
[8] Kshirsagar, S., S. Garchery, and N. Magnenat-Thalmann, “Feature point based mesh deformation applied to MPEG-4 facial animation,” in IFIP TC5/WG5.10 Deform and AVATAR 2000 Workshop on Deformable Avatars, Geneva and Lausanne, Switzerland, Nov.29-Dec.1, 2001, pp.24-34.
[9] Lavagetto, F. and R. Pockaj, "The facial animation engine: Toward a high-level interface for the design of MPEG-4 compliant animated faces," IEEE Trans. on Circuits and Systems for Video Technology, vol.9, no.2, pp.277-289, May 1999.
[10] Liao, C.-Y., Facial Modeling and Animation based on Muscle and Skull, Master thesis, Dept. of Computer Science and Information Engineering, National Tsing Hua Univ., Hsin-Chu, Taiwan, 2002.
[11] Mani, M. and J. Ostermann, "Cloning of MPEG-4 face models," in Int. Workshop on Very Low Bitrate Video Coding (VLBV01), Athens, Greece, Oct.11-12, 2001, p.206.
[12] Morishima, S., “Face analysis and synthesis,” IEEE Signal Processing Magazine, vol.18, no.3, pp.26-34, May 2001.
[13] Noh, J. and U. Neumann, "Expression cloning," in ACM SIGGRAPH, Los Angeles, CA, Aug.12-17, 2001, pp.277-288.
[14] Pandzic, I., "Facial motion cloning," Graphical Models, vol.65, no.6, pp.385-404, Sep. 2003.
[15] Parke, F. I., “Computer generated animation of faces,” in Proc. of the ACM Annual Conf., Boston, MA, Aug.1, 1972, pp.451-457.
[16] Pasquariello, S. and C. Pelachaud, “Greta: a simple facial animation engine,” in Proc. of the 6th Online World Conf. Soft Computing in Industrial Applications, Sep.10-24, 2001.
[17] Pereira, F. and T. Ebrahimi, The MPEG-4 Book, Prentice Hall PTR, New Jersey, 2002.
[18] Sumner, R. W. and J. Popovic, "Deformation transfer for triangle meshes," in ACM SIGGRAPH, Los Angeles, CA, Aug.8-12, 2004, pp.399-405.
Advisor: Din-chang Tseng (曾定章)    Date of Approval: 2007-7-18
