With the rapid recovery of the global aviation industry after the COVID-19 pandemic, the sharp increase in flight volume has placed significant pressure on ground handling operations, especially workforce scheduling. Traditional rule-based, experience-driven dispatching methods can no longer cope with the growing density and variability of flight schedules. To address this challenge, this study proposes an intelligent dispatching model named FGHD-DQN SYSTEM (Flight Ground Handling Dispatch Deep Q-Network SYSTEM), which applies a Deep Reinforcement Learning (DRL) technique, specifically the Deep Q-Network (DQN), to simulate ground handling dispatch decisions and optimize workforce allocation. Using real-world operational data from a major ground handling company, the model constructs a simulation environment that transforms each flight's operational features into a fixed-dimension state vector. The action space is defined around three core decisions: increasing, decreasing, or maintaining the current workforce level. The model is trained with Temporal Difference (TD) learning, incorporating experience replay, target-network updates, and an ε-greedy exploration strategy to iteratively learn an optimal dispatching policy.
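The training machinery named above (TD learning with experience replay, a target network, and ε-greedy exploration) can be sketched as follows. This is a minimal illustration, not the FGHD-DQN implementation: the state dimension, action encoding, placeholder reward, and the linear Q-function standing in for the deep network are all assumptions for exposition.

```python
import random
from collections import deque

import numpy as np

rng = np.random.default_rng(0)
random.seed(0)

STATE_DIM = 6    # illustrative flight features (delay, load factor, ...)
N_ACTIONS = 3    # 0 = decrease, 1 = maintain, 2 = increase staffing
GAMMA, EPS, LR = 0.95, 0.1, 0.01

# A linear Q-function stands in for the deep network: Q(s) = W @ s.
W = rng.normal(scale=0.1, size=(N_ACTIONS, STATE_DIM))
W_target = W.copy()                    # target network
buffer = deque(maxlen=10_000)          # experience replay buffer

def act(state, eps=EPS):
    """ε-greedy action selection over the three staffing adjustments."""
    if random.random() < eps:
        return random.randrange(N_ACTIONS)
    return int(np.argmax(W @ state))

def td_update(batch):
    """One TD step per transition: move Q(s, a) toward r + γ max Q_target(s')."""
    for s, a, r, s_next, done in batch:
        target = r if done else r + GAMMA * np.max(W_target @ s_next)
        td_error = target - (W @ s)[a]
        W[a] += LR * td_error * s      # gradient step for the linear model

# Toy trial-and-error loop; a random stand-in replaces the flight simulator.
for step in range(500):
    s = rng.random(STATE_DIM)
    a = act(s)
    r = -0.1 * abs(a - 1)              # placeholder reward, not the study's
    s_next, done = rng.random(STATE_DIM), False
    buffer.append((s, a, r, s_next, done))
    if len(buffer) >= 32:
        td_update(random.sample(buffer, 32))
    if step % 100 == 0:
        W_target = W.copy()            # periodic target-network sync
```

The target network is synchronized only periodically, which is what stabilizes the bootstrapped TD target relative to the continuously updated online weights.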
The reward function considers three key dimensions: operational timeliness, workforce-allocation efficiency, and cost-effectiveness, guiding the agent toward practical, adaptive decisions. Experimental results demonstrate that the proposed FGHD-DQN SYSTEM adapts effectively to highly dynamic flight scenarios, improves operational timeliness, and reduces both workforce idleness and overload. These findings support the model's applicability to real-world workforce scheduling and offer insights for developing intelligent scheduling systems in airport operations.
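A three-term reward of the kind described can be sketched as a weighted sum of penalties. The weights, argument names, and linear penalty forms below are illustrative assumptions, not the study's actual formulation.

```python
def dispatch_reward(delay_min, staff_assigned, staff_required, cost_per_worker,
                    w_time=1.0, w_alloc=0.5, w_cost=0.1):
    """Hypothetical reward combining timeliness, allocation fit, and cost.

    All three terms are penalties (<= 0); the agent maximizes their sum.
    """
    timeliness = -w_time * max(delay_min, 0)                      # late = bad
    allocation = -w_alloc * abs(staff_assigned - staff_required)  # over/under
    cost = -w_cost * cost_per_worker * staff_assigned             # payroll
    return timeliness + allocation + cost
```

With this shape, over-staffing and under-staffing are penalized symmetrically by the allocation term, while the cost term breaks the tie in favor of the leaner roster, which is one simple way to discourage both idleness and overload.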