NCU Institutional Repository: Item 987654321/91812


    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/91812


    Title: Design of Transformer Architecture based on different Position Encoding in Time series forecasting (時間序列預測中變換器架構之位置編碼設計)
    Author: You, Ching-Siang (游景翔)
    Contributor: Institute of Industrial Management
    Keywords: Data mining; Deep learning; Time series; Transformer model
    Date: 2023-07-10
    Upload time: 2024-09-19 14:14:35 (UTC+8)
    Publisher: National Central University
    Abstract: Time series analysis and forecasting are essential components of data mining. Time series data are values collected at regular time intervals, such as yearly, monthly, weekly, or daily. By analyzing a time series, we can characterize how the data change and forecast future values. Time series prediction has been a research focus over the past decade; with growing data availability and computing power, many deep learning-based models have emerged, and the diversity of time series problems across domains has driven many different model designs. Time series trend forecasting remains an important topic, as its results provide a basis for applications in many fields, such as the control and optimization of production planning.
    The Transformer is a neural network architecture originally proposed for natural language processing. It uses a mechanism called attention, or self-attention, to detect how the elements of a sequence influence and depend on one another. In this study, we apply the Transformer to time series forecasting and examine whether its parallel computation can overcome the sequence-length limitations that long short-term memory (LSTM) models face when learning long sequences.
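    To make the attention mechanism concrete, the following is a minimal NumPy sketch of single-head scaled dot-product self-attention. It is an illustrative reconstruction of the standard mechanism, not code from the thesis; the function name, argument shapes, and weight matrices are assumptions.

        import numpy as np

        def self_attention(x, w_q, w_k, w_v):
            # Illustrative sketch (not from the thesis).
            # x: (seq_len, d_model) sequence of time-step embeddings
            # w_q, w_k, w_v: (d_model, d_k) learned projection matrices
            q = x @ w_q                                   # queries
            k = x @ w_k                                   # keys
            v = x @ w_v                                   # values
            scores = q @ k.T / np.sqrt(k.shape[-1])       # pairwise dependence scores
            scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
            weights = np.exp(scores)
            weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions
            return weights @ v  # each output mixes information from all time steps

    Because every output position attends to all time steps in one matrix operation, the sequence is processed in parallel rather than step by step as in an LSTM, which is the property the study examines.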
    In addition, we use different positional encoding mechanisms to supply each data point's position in the sequence, and we investigate how the choice of positional encoding affects the Transformer's time series predictions. In the experiments (Chapter 4), we evaluate each model on five real-world time series datasets covering different temporal trends.
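    As a reference point for such a comparison, below is a sketch of the sinusoidal positional encoding from the original Transformer paper, a standard scheme that comparisons of this kind typically include; the specific encoding variants evaluated in the thesis are not listed on this page, so this is an assumed baseline.

        import numpy as np

        def sinusoidal_positional_encoding(seq_len, d_model):
            # Standard sinusoidal scheme (assumed baseline; d_model must be even):
            #   PE[pos, 2i]   = sin(pos / 10000**(2i / d_model))
            #   PE[pos, 2i+1] = cos(pos / 10000**(2i / d_model))
            positions = np.arange(seq_len)[:, None]           # (seq_len, 1)
            dims = np.arange(0, d_model, 2)[None, :]          # even dimension indices
            angles = positions / np.power(10000.0, dims / d_model)
            pe = np.zeros((seq_len, d_model))
            pe[:, 0::2] = np.sin(angles)                      # even dimensions: sine
            pe[:, 1::2] = np.cos(angles)                      # odd dimensions: cosine
            return pe

    The encoding is added to the input embeddings before the first attention layer, giving the otherwise order-agnostic attention mechanism a notion of each time step's position in the series.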

    Appears in Collections: [Institute of Industrial Management] Theses & Dissertations

    Files in This Item:

    File          Description    Size    Format    Views
    index.html                   0 KB    HTML      21       View/Open


    All items in NCUIR are protected by original copyright.

