The aim of this project is to research and develop a human-machine interaction system with a novel tracking and 3D-reconstruction subsystem and visualization technology, based on a virtual environment with a panoramic screen built across multiple monitors. Traditionally, professional 3D modeling tools such as 3dsMax or Maya are used for 3D model construction, but it is difficult for an animator to draw an irregular surface such as a human shape, or to produce the proper texture mapping. Moreover, the traditional 2D information acquired through regular cameras carries no sense of depth and therefore cannot depict real-world 3D objects correctly. To address these gaps, this project proposes an algorithm for reconstructing a complete 3D human model from the RGB-D data provided by a Microsoft Kinect depth sensor. Using the color and depth maps supplied by the Kinect, we can not only render the full point cloud of a real object but also map onto it the texture captured by the color camera; the main objective of this proposal is to perform this 3D reconstruction of objects and scenes in real time. However, a single Kinect cannot capture all of an object's information across a 360° view, since only part of the object is visible to the camera. To overcome this limitation, this project employs multiple Kinects arranged in a 360° panoramic configuration, used simultaneously to capture a human subject and to track real 3D objects in the scene quickly. Tracking a human in the scene quickly and precisely would otherwise require testing all possible sizes of a human template, which is inefficient in processing time.
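The two core steps described above can be sketched in a few lines: back-projecting an aligned depth/color pair into a colored point cloud via the pinhole camera model, then fusing clouds from several calibrated Kinects into one world frame. This is a minimal illustration only; the function names, the toy 2×2 depth map, and the intrinsics/extrinsics values are assumptions for the sketch, not the project's actual pipeline.

```python
import numpy as np

def depth_to_point_cloud(depth, color, fx, fy, cx, cy):
    """Back-project an aligned depth/color pair into a colored point cloud.

    depth : (H, W) depth values in meters (0 marks pixels with no reading)
    color : (H, W, 3) RGB image registered to the depth map
    fx, fy, cx, cy : pinhole intrinsics of the depth camera (illustrative values)
    Returns (N, 3) points and the (N, 3) colors of the valid pixels.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    z = depth[valid]
    x = (u[valid] - cx) * z / fx          # pinhole model: X = (u - cx) * Z / fx
    y = (v[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=1), color[valid]

def merge_clouds(clouds, extrinsics):
    """Bring per-camera clouds into one world frame given known (R, t) per camera."""
    merged = [pts @ R.T + t for (pts, _), (R, t) in zip(clouds, extrinsics)]
    return np.vstack(merged)

# Toy data: two identical 2x2 views, the second camera shifted 0.1 m along x.
depth = np.array([[1.0, 2.0],
                  [0.0, 1.5]])
color = np.full((2, 2, 3), 128, dtype=np.uint8)
cloud_a = depth_to_point_cloud(depth, color, fx=525.0, fy=525.0, cx=0.5, cy=0.5)
cloud_b = depth_to_point_cloud(depth, color, fx=525.0, fy=525.0, cx=0.5, cy=0.5)
extrinsics = [(np.eye(3), np.zeros(3)),
              (np.eye(3), np.array([0.1, 0.0, 0.0]))]
world = merge_clouds([cloud_a, cloud_b], extrinsics)
print(world.shape)  # (6, 3): three valid pixels from each of the two cameras
```

In a real multi-Kinect setup the rotations and translations would come from an extrinsic calibration step, and the merged clouds would typically be refined (e.g. with ICP) before texturing.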