Deep reinforcement learning (DRL) has demonstrated notable success in solving complex decision-making problems, yet it often suffers from a lack of behavioral diversity in environments with sparse rewards or multiple optimal strategies. To overcome this limitation, we propose Adaptive Reward-Switching Policy Optimization (ARPO), a trajectory-level filtering framework that dynamically adjusts its novelty threshold according to the agent's performance trends during training. ARPO builds upon the Reward-Switching Policy Optimization (RSPO) paradigm, using the mean negative log-likelihood (NLL) as a behavioral similarity measure and adapting the filtering threshold based on reward dynamics. This adaptive mechanism enables the agent to promote diverse exploration when learning stagnates and to focus on policy refinement when rewards improve. We evaluate ARPO in challenging maze environments with dynamic hazards and deceptive rewards, comparing its performance against baseline methods including PPO, DvD, SMERL, and RSPO. Experimental results show that ARPO achieves higher reward acquisition, greater behavioral diversity, and improved adaptability. This work highlights the importance of adaptive novelty filtering in developing robust and strategically diverse reinforcement learning agents.
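To make the mechanism described above concrete, the following is a minimal Python sketch of how an adaptive, trajectory-level novelty filter of this kind could be organized. The class name `AdaptiveNoveltyFilter`, the reward-window comparison, the threshold step size, and the `policy.log_prob(state, action)` interface are illustrative assumptions, not the implementation used in this work.

```python
import numpy as np

class AdaptiveNoveltyFilter:
    """Illustrative sketch of ARPO-style trajectory filtering.

    Names and the exact update rule are assumptions made for exposition,
    not the thesis implementation.
    """

    def __init__(self, init_threshold=1.0, step=0.05, window=10):
        self.threshold = init_threshold  # minimum mean NLL for a trajectory to count as novel
        self.step = step                 # how far the threshold moves per update
        self.window = window             # number of episodes per reward-trend window
        self.returns = []                # recent episode returns

    def update_threshold(self, episode_return):
        """Tighten the filter when rewards improve (favor refinement);
        relax it when learning stagnates (favor diverse exploration)."""
        self.returns.append(episode_return)
        if len(self.returns) < 2 * self.window:
            return
        recent = np.mean(self.returns[-self.window:])
        previous = np.mean(self.returns[-2 * self.window:-self.window])
        if recent > previous:
            self.threshold += self.step  # stricter novelty requirement while refining
        else:
            self.threshold -= self.step  # looser requirement to encourage new behaviors
        self.threshold = max(self.threshold, 0.0)

    def is_novel(self, trajectory, reference_policies):
        """Accept a trajectory only if every previously learned policy assigns
        it low likelihood, i.e. its mean NLL exceeds the current threshold."""
        nll_means = []
        for policy in reference_policies:
            # policy.log_prob(state, action) is an assumed interface
            log_probs = [policy.log_prob(state, action) for state, action in trajectory]
            nll_means.append(-np.mean(log_probs))
        return min(nll_means) > self.threshold
```

In this sketch, raising the threshold after a reward improvement makes the filter harder to pass, so training concentrates on refining the current behavior; lowering it after stagnation admits more trajectories as novel, pushing the agent toward diverse exploration.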