RC Version 7.0 © Powered By DSPACE, MIT. Enhanced by NTU Library IR team.


    Please use this permanent URL to cite or link to this item: https://ir.lib.ncu.edu.tw/handle/987654321/99380


    Title: Integrated Improved Prophet-LSTM Method Based on Kubernetes Autoscaling Mechanisms: Optimizing Peak Traffic Prediction and Resource Efficiency
    Author: Ha, Yuan-Tai
    Contributor: Executive Master Program, Department of Computer Science and Information Engineering
    Keywords: Kubernetes; autoscaling; HPA; KEDA; Prophet; LSTM; proactive scaling; feature engineering; tail latency; resource efficiency
    Date: 2025-10-27
    Upload time: 2026-03-06 18:51:02 (UTC+8)
    Publisher: National Central University
    Abstract: Cloud services frequently face event-driven high-traffic loads, and the traditional reactive Kubernetes Horizontal Pod Autoscaler (HPA) tends to react too late when traffic peaks arrive. This thesis proposes a series of feature-engineering-centric improvements to an existing Prophet-LSTM hybrid model. By learning the residuals left after trend and seasonality decomposition, and introducing lag, differencing, event-time annotation, and interaction features, the model's ability to perceive and fit traffic peaks is significantly strengthened. Time-series cross-validation showed stable generalization on the NASA dataset but revealed challenges with unseen high-volatility patterns in the FIFA dataset; final test-set evaluation nevertheless confirmed the overall effectiveness of the feature engineering. For system integration, this work designs and implements a deployable proactive autoscaling architecture that integrates Prometheus, KEDA, and native Kubernetes components: the improved model exports short-horizon RPS forecasts as a custom metric, driving KEDA to provision resources ahead of demand. The accuracy gains were validated on the NASA and FIFA World Cup 1998 test sets, with R² values exceeding 0.9998 on both.
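The residual feature families named in the abstract (lags, differencing, event-time annotation, interaction terms) can be sketched as a small pandas pipeline. This is an illustrative reconstruction, not the thesis's actual code: the column names `resid` and `timestamp`, the lag depths, and the event-hour window are all assumptions.

```python
import pandas as pd

def build_residual_features(df: pd.DataFrame) -> pd.DataFrame:
    """Augment a residual series (actual RPS minus Prophet's
    trend+seasonality fit) with lag, differencing, event-time,
    and interaction features for the downstream LSTM."""
    out = df.copy()
    # Lag features: residual values 1, 2, and 3 steps back.
    for k in (1, 2, 3):
        out[f"resid_lag{k}"] = out["resid"].shift(k)
    # Differencing: the first difference captures sudden jumps.
    out["resid_diff1"] = out["resid"].diff()
    # Event-time annotation: flag known high-traffic windows
    # (e.g. evening kickoff hours in a FIFA '98-style workload).
    out["is_event"] = out["timestamp"].dt.hour.isin([18, 19, 20]).astype(int)
    # Interaction term: recent residual level weighted by whether
    # we are inside an event window.
    out["event_x_lag1"] = out["is_event"] * out["resid_lag1"]
    # Drop the leading rows made NaN by shifting/differencing.
    return out.dropna()
```

The LSTM would then be trained on these engineered columns rather than on the raw residual alone, which is what gives it sharper awareness of peak onsets.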
In repeated deployment experiments that replayed FIFA peak traffic with k6, the proposed method outperformed HPA. It markedly improved service quality, cutting the request failure rate by 81.7% (from 0.690% to 0.126%) and P99 tail latency by 57.7% (from 280.79 ms to 119.33 ms), while simultaneously reducing the maximum replica count (from 50.0 to 18.0) and the scaling frequency (from 12.87 to 8.17 operations per hour). Overall, this work demonstrates that enhanced feature engineering significantly improves predictive accuracy, and that this accuracy translates into lower latency, higher resource efficiency, and greater system stability in practical deployments.
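The deployment path the abstract describes — publishing the model's short-horizon RPS forecast as a custom metric that a KEDA `prometheus` scaler can query — might look like the following minimal sketch using only the standard library. The metric name `predicted_rps`, the port, and the example PromQL query are assumptions for illustration, not the thesis's actual configuration.

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Latest short-horizon forecast; updated after each model inference.
_latest_rps = 0.0
_lock = threading.Lock()

class MetricsHandler(BaseHTTPRequestHandler):
    """Serve the forecast in Prometheus text exposition format so a
    KEDA `prometheus` scaler can target it, e.g. with a query like
    max_over_time(predicted_rps[2m]) divided by a per-pod RPS budget."""
    def do_GET(self):
        if self.path != "/metrics":
            self.send_error(404)
            return
        with _lock:
            body = (
                "# TYPE predicted_rps gauge\n"
                f"predicted_rps {_latest_rps}\n"
            ).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request access logging

def set_forecast(rps: float) -> None:
    """Called with the model's next-horizon RPS after each inference."""
    global _latest_rps
    with _lock:
        _latest_rps = float(rps)

def serve_metrics(port: int = 9100) -> HTTPServer:
    """Expose /metrics in a background thread for Prometheus to scrape."""
    server = HTTPServer(("127.0.0.1", port), MetricsHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Prometheus scrapes this endpoint, and KEDA then scales the Deployment from the queried forecast rather than from observed load, which is what lets replicas be provisioned before the peak arrives.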
    Appears in Collections: [Executive Master Program of Computer Science and Information Engineering] Theses and Dissertations

    Files in This Item:

    File | Description | Size | Format | Views
    index.html | — | 0 Kb | HTML | 13


    All items in NCUIR are protected by copyright, with all rights reserved.

