This study develops a workflow for integrating the spatial coordinate systems of multi-view camera groups for the shape-from-silhouette (SFS) technique. SFS-based 3D reconstruction commonly relies on a rotary table to capture the geometry and color information of an object. Because the table rotates about only one axis, the viewing angles are limited, and the top and bottom of the object are captured insufficiently or not at all. In SFS reconstruction, this missing information produces artifact surfaces at the top and bottom, so the resulting 3D model deviates from the actual shape of the object.

If the object is tipped over, repositioned on the rotary table, and photographed again, the missing information about the top, the bottom, and other occluded features can be replenished and used to rebuild a model closer to the object's true appearance. The coordinate system of the imaging environment is established with a calibration object, so the silhouette data captured in these auxiliary (tipped) positions must be referred back to the coordinate system established by the primary view of the first capture. To integrate all silhouette data taken from different views into a single coordinate system, this study develops an alignment by image matching (AIM) algorithm that establishes the spatial distribution of all camera positions. In this algorithm, the silhouettes obtained in the tipped positions serve as targets. The 3D model reconstructed from the primary view is transformed into a predicted pose that simulates one of the tipped positions and is projected onto the imaging plane of the camera to produce a predicted silhouette, which is then compared with the corresponding target. The AIM algorithm minimizes the difference between the two silhouettes and computes the translation and rotation the model must undergo in 3D space. When the sum of the differences over all tipped positions reaches its minimum, the camera positions of all auxiliary views are integrated into the coordinate system of the primary view, and a complete 3D model can be rebuilt by the SFS method from the silhouette data of all views.

Finally, three examples reconstructed with the proposed multi-view coordinate-system integration workflow and the AIM algorithm are presented: the information from several auxiliary-view camera groups is integrated and passed to the reconstruction stage, verifying the correctness and feasibility of the proposed process.
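The core of the AIM step described above is a rigid-pose search that minimizes the mismatch between predicted and observed silhouettes over all tipped positions. The following is a minimal Python sketch of that idea, not the thesis's actual implementation: the helper names (project_silhouette, aim_cost, align_by_image_matching), the use of 3x4 projection matrices, the pixel-wise symmetric-difference cost, and the choice of a derivative-free optimizer are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation


def project_silhouette(points, P, image_shape):
    # Project 3D surface points through a 3x4 camera matrix P and
    # rasterize the hits into a binary silhouette mask (hypothetical helper).
    homog = np.hstack([points, np.ones((len(points), 1))])
    uvw = (P @ homog.T).T
    uv = np.round(uvw[:, :2] / uvw[:, 2:3]).astype(int)
    mask = np.zeros(image_shape, dtype=bool)
    inside = ((uv[:, 0] >= 0) & (uv[:, 0] < image_shape[1]) &
              (uv[:, 1] >= 0) & (uv[:, 1] < image_shape[0]))
    mask[uv[inside, 1], uv[inside, 0]] = True
    return mask


def aim_cost(pose, model_points, target_masks, cameras, image_shape):
    # Cost of one candidate rigid pose: number of mismatched pixels
    # (symmetric difference) between predicted and target silhouettes,
    # summed over every auxiliary (tipped) view.
    rotvec, tvec = pose[:3], pose[3:]
    moved = Rotation.from_rotvec(rotvec).apply(model_points) + tvec
    cost = 0
    for P, target in zip(cameras, target_masks):
        predicted = project_silhouette(moved, P, image_shape)
        cost += np.logical_xor(predicted, target).sum()
    return cost


def align_by_image_matching(model_points, target_masks, cameras,
                            image_shape, initial_pose=None):
    # Search for the rotation vector + translation that minimizes the
    # silhouette mismatch; a derivative-free method is used here because
    # the pixel-count cost is not smooth.
    x0 = np.zeros(6) if initial_pose is None else np.asarray(initial_pose)
    result = minimize(aim_cost, x0,
                      args=(model_points, target_masks, cameras, image_shape),
                      method="Powell")
    return result.x  # estimated pose linking the tipped placement to the primary frame
```

Once such a pose is found for each tipped placement, the corresponding auxiliary-view camera positions can be expressed in the primary-view coordinate system, and the SFS reconstruction can be rerun with the silhouettes from every view.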