    Please use this permanent URL to cite or link to this item: https://ir.lib.ncu.edu.tw/handle/987654321/98227


    Title: 航班地勤作業人力需求之研究 - 基於 DQN 在地勤人力調度的應用; A Study on the Manpower Requirements of Flight Ground Handling Operations: Application of DQN in Ground Manpower Scheduling
    Authors: 林純琿; Lin, Chun-Hun
    Contributors: Executive Master Program, Department of Information Management (資訊管理學系在職專班)
    Keywords: ground handling operations; reinforcement learning; DQN; scheduling problems; manpower dispatch
    Date: 2025-06-27
    Uploaded: 2025-10-17 12:31:10 (UTC+8)
    Publisher: National Central University (國立中央大學)
    Abstract: With the rapid recovery of the global aviation market after the COVID-19 pandemic, the sharp increase in flight volume has placed significant pressure on airport ground handling operations, particularly in workforce scheduling. Traditional dispatching methods that rely on experience and fixed rules are no longer adequate to cope with the growing density and variability of flight schedules. To improve the efficiency and accuracy of ground handling workforce allocation, this study proposes an intelligent dispatching model, the FGHD-DQN SYSTEM (Flight Ground Handling Dispatch Deep Q-Network SYSTEM), which applies the Deep Q-Network (DQN) method from Deep Reinforcement Learning (DRL) to simulate dispatching decisions and optimize workforce use.
    Using real-world operational data from a major ground handling company, the study constructs a simulation environment in which each flight's operational features are transformed into a fixed-dimension state vector, and the action space is defined around three core decisions: increasing, decreasing, or maintaining the current workforce. The model is trained with Temporal Difference (TD) learning, combining experience replay, a target network, and an ε-greedy exploration strategy to learn an optimal dispatching policy through trial and error. The reward function jointly considers operational timeliness, workforce allocation efficiency, and cost effectiveness, guiding the agent toward practical, adaptive dispatching decisions. Experimental results show that the proposed FGHD-DQN SYSTEM adapts effectively to highly dynamic flight scenarios, dynamically adjusting workforce allocation to improve on-time performance and reduce both workforce idleness and overload. These findings demonstrate the model's adaptability and practical potential, and offer a reference for airport operators developing intelligent scheduling systems.
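    To make the mechanics described above concrete, the following is a minimal, illustrative DQN sketch in PyTorch for a dispatch problem of this shape: a fixed-dimension flight-feature state, a three-action space (decrease / maintain / increase staffing), ε-greedy exploration, experience replay, and a periodically synchronized target network. The feature dimension, network sizes, hyperparameters, and the random placeholder environment and reward are assumptions for illustration only; they are not the thesis's actual FGHD-DQN SYSTEM configuration.

import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

STATE_DIM = 8                                   # assumed number of flight features
ACTIONS = ["decrease", "maintain", "increase"]  # action space named in the abstract

class QNetwork(nn.Module):
    """Small MLP mapping a flight state vector to Q-values for the three actions."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)

policy_net = QNetwork(STATE_DIM, len(ACTIONS))
target_net = QNetwork(STATE_DIM, len(ACTIONS))
target_net.load_state_dict(policy_net.state_dict())
optimizer = optim.Adam(policy_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)                   # experience replay buffer
gamma, epsilon, batch_size = 0.99, 0.1, 64

def select_action(state: torch.Tensor) -> int:
    """ε-greedy exploration over the dispatch actions."""
    if random.random() < epsilon:
        return random.randrange(len(ACTIONS))
    with torch.no_grad():
        return int(policy_net(state).argmax().item())

def train_step():
    """One TD-learning update from a random minibatch of stored transitions."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    states, actions, rewards, next_states = zip(*batch)
    states = torch.stack(states)
    actions = torch.tensor(actions).unsqueeze(1)
    rewards = torch.tensor(rewards, dtype=torch.float32)
    next_states = torch.stack(next_states)

    # Q(s, a) from the policy network; TD target from the frozen target network.
    q_values = policy_net(states).gather(1, actions).squeeze(1)
    with torch.no_grad():
        targets = rewards + gamma * target_net(next_states).max(dim=1).values
    loss = nn.functional.smooth_l1_loss(q_values, targets)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Toy interaction loop: random states and rewards stand in for the simulated
# flight environment and the composite timeliness/allocation/cost reward.
for step in range(1, 1001):
    state = torch.rand(STATE_DIM)
    action = select_action(state)
    reward = random.uniform(-1.0, 1.0)          # placeholder composite reward
    next_state = torch.rand(STATE_DIM)
    replay.append((state, action, reward, next_state))
    train_step()
    if step % 100 == 0:                         # periodic target-network sync
        target_net.load_state_dict(policy_net.state_dict())

    In a full implementation, the placeholder environment would be replaced by the flight-level simulation built from operational data, and the scalar reward by the weighted combination of on-time performance, staffing adequacy, and cost described in the abstract.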
    Appears in Collections: [Executive Master Program of Information Management] Theses & Dissertations

