This thesis proposes a multi-view camera pose estimation method based on the spatial distribution of image magnification. By combining the magnification measured at feature intersections with the geometric imaging properties of a thick lens, the relative camera pose is estimated via nonlinear least squares. Without relying on a 3D world coordinate system, a rotationally symmetric planar calibration pattern is designed, and the spatial distribution of the magnification curve across the image is used to recover the relative position and orientation between the camera and the calibration plane.

The core of the method uses the object-image distance relationship of the thick-lens model to initialize the camera pose; when 2D-3D correspondences are available, a PnP solution provides a robust initial estimate. Magnification is defined with respect to the image center and the corresponding object-space center, and the relative magnification of each point against this reference is used to build a nonlinear error model. The Levenberg-Marquardt algorithm then iteratively refines the pose by minimizing the difference between measured and predicted magnifications, with the loss function visualized to verify convergence, yielding high-accuracy pose estimates together with a predicted magnification curve.

Compared with conventional PnP methods, the proposed approach improves estimation stability and is more tolerant of lens aberrations when the imaging model and camera parameters are known. Experimental results show that the method accurately recovers changes in camera distance and viewing angle across different viewpoints; the estimated poses agree with the actual displacement trends, and estimation errors are significantly reduced. The optimization process is robust and efficient, making the method well suited to multi-view geometric imaging and structure-from-motion tasks, including dynamic scenes.