This paper addresses the challenges of learning-based monocular positioning by proposing a novel integration approach that combines the strengths of Absolute Pose Regression (APR) and Relative Pose Regression (RPR). We introduce VKFPos, a theoretically consistent strategy that integrates the predicted absolute and relative poses using an Extended Kalman Filter (EKF) within a variational Bayesian inference framework. An essential aspect of our method is the consideration of pose covariance during training, which enables both branches to model the uncertainty associated with each predicted pose. Experimental results on indoor and outdoor datasets, namely 7-Scenes and Oxford RobotCar, show that our single-shot method achieves accuracy comparable to state-of-the-art approaches. Moreover, in temporal positioning, VKFPos outperforms existing methods, improving accuracy by at least $10\%$ on the indoor datasets and by at least $42\%$ on the challenging outdoor dataset. In summary, VKFPos offers a robust and reliable solution, demonstrating its effectiveness across diverse environments and scenarios.
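To make the fusion idea in the abstract concrete, the sketch below illustrates a generic EKF predict/update cycle in which the RPR branch's relative pose drives the prediction step and the APR branch's absolute pose serves as the measurement, each weighted by its predicted covariance. This is a minimal illustration under simplifying assumptions, not the paper's exact formulation: poses are treated as 6-D vectors and relative-pose composition is approximated by vector addition, whereas VKFPos operates on the full pose representation with learned covariances; all function and variable names here are hypothetical.

```python
import numpy as np

def ekf_predict(x, P, rel_pose, Q_rel):
    """Propagate the pose estimate with the RPR branch's relative-pose prediction.

    Simplification: composition is approximated as vector addition, so the
    motion-model Jacobian reduces to the identity.
    """
    x_pred = x + rel_pose            # simplified pose composition
    F = np.eye(6)                    # Jacobian of the linearized motion model
    P_pred = F @ P @ F.T + Q_rel     # Q_rel: covariance predicted by the RPR branch
    return x_pred, P_pred

def ekf_update(x_pred, P_pred, abs_pose, R_abs):
    """Correct the pose estimate with the APR branch's absolute-pose prediction."""
    H = np.eye(6)                          # the pose is observed directly
    y = abs_pose - x_pred                  # innovation
    S = H @ P_pred @ H.T + R_abs           # innovation covariance (R_abs from APR branch)
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x_pred + K @ y
    P = (np.eye(6) - K @ H) @ P_pred
    return x, P

# Usage sketch: fuse one frame's RPR increment and APR measurement.
x, P = np.zeros(6), np.eye(6) * 0.1
x_pred, P_pred = ekf_predict(x, P, rel_pose=np.full(6, 0.05), Q_rel=np.eye(6) * 0.01)
x, P = ekf_update(x_pred, P_pred, abs_pose=np.full(6, 0.06), R_abs=np.eye(6) * 0.02)
```

The design point this illustrates is that the learned covariances (Q_rel and R_abs here) determine how strongly the filter trusts the relative motion versus the absolute measurement, which is why modeling pose uncertainty during training matters for the integration.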