Driven by green and sustainable transportation technologies, electric buses are rapidly replacing conventional fuel-powered buses and becoming a key component of urban public transit. To ensure operational efficiency, effective planning of their charging and scheduling operations is a critical task. To address the complexity of charging demand and the challenges posed by dynamic electricity pricing, this thesis proposes an optimized charging scheduling method based on Multi-Agent Deep Reinforcement Learning (MADRL). The method adopts the Multi-Agent Twin Delayed Deep Deterministic Policy Gradient (MATD3) framework, enabling multiple electric buses to collaboratively learn charging strategies across charging stations and time periods. This study constructs a comprehensive simulation environment that integrates an energy consumption model with historical bus trajectory data to reproduce realistic power usage and charging demand. In the simulation experiments, the proposed strategy is evaluated against multiple objectives, including minimizing electricity costs and maintaining sufficient battery levels. Experimental results show that the proposed scheduling strategy significantly reduces overall charging costs and improves the efficiency of energy resource allocation. This study therefore provides a feasible and practical solution for autonomous charging decision-making in intelligent transportation systems.

Keywords: electric bus, charging scheduling, MATD3, multi-agent reinforcement learning, intelligent transportation systems
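As a concrete illustration of the two objectives named above (electricity-cost minimization and sufficient battery levels), the following minimal Python sketch shows one plausible way a per-bus reward signal could be shaped from a time-of-use electricity price and a state-of-charge requirement. The function name, tariff values, thresholds, and weights are illustrative assumptions for exposition only, not the exact formulation used in this thesis.

```python
# Hypothetical reward shaping for one electric-bus agent.
# All tariffs, thresholds, and weights are illustrative assumptions,
# not the exact values or formulation used in this thesis.

def charging_reward(energy_kwh: float,
                    price_per_kwh: float,
                    soc_after: float,
                    soc_required: float = 0.8,
                    cost_weight: float = 1.0,
                    shortfall_weight: float = 5.0) -> float:
    """Return a scalar reward for one charging decision.

    energy_kwh    -- energy drawn from the charger in this time slot
    price_per_kwh -- current (time-of-use / dynamic) electricity price
    soc_after     -- battery state of charge after charging, in [0, 1]
    soc_required  -- minimum SOC the bus should hold before its next trip
    """
    # Penalize the monetary cost of the energy purchased in this slot.
    cost_term = -cost_weight * energy_kwh * price_per_kwh

    # Penalize any shortfall below the required state of charge,
    # encoding the "sufficient battery level" objective.
    shortfall = max(0.0, soc_required - soc_after)
    soc_term = -shortfall_weight * shortfall

    return cost_term + soc_term


if __name__ == "__main__":
    # Example: a bus charges 30 kWh in a peak-price slot and ends at 75% SOC.
    r = charging_reward(energy_kwh=30.0, price_per_kwh=0.25, soc_after=0.75)
    print(f"reward = {r:.2f}")  # cost term -7.50 plus shortfall penalty -0.25
```

In a multi-agent setting such as MATD3, a reward of this shape would typically be computed per bus and per time slot, while the centralized critics observe the joint state of all buses and charging stations during training.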