RC Version 7.0 © Powered By DSPACE, MIT. Enhanced by NTU Library IR team.


    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/95752


    Title: Exploring the Real-time Applications of Action Observation in Brain-Computer Interface with Deep Learning in the Metaverse (探討動作觀察對於腦波人機介面於深度學習在元宇宙下的即時應用)
    Author: YUAN, JIA-JUN (袁嘉浚)
    Contributor: Department of Electrical Engineering
    Keywords: Brain-Computer Interface; Motor Imagery; Deep Learning; Continual Learning; Action Observation; Transformer
    Date: 2024-07-26
    Upload time: 2024-10-09 17:14:53 (UTC+8)
    Publisher: National Central University
    Abstract: Brain-Computer Interface (BCI) research is flourishing, showing immense application potential in fields such as medical rehabilitation, intelligent device control, and entertainment. One significant application of BCI is Motor Imagery (MI); within Virtual Reality (VR) interaction, Action Observation (AO) is also considered an important training strategy for MI. This thesis therefore combines VR with Unity to design two scenarios for participants to play, pure MI and AO+MI, and analyzes the changes in each participant's EEG topographic maps over the course of the experiment. Each participant provides electroencephalogram (EEG) signal data while performing left-hand motor imagery, right-hand motor imagery, and resting.

    This thesis adopts a model that combines the Transformer architecture with EEGNet. EEGNet's convolutional layers capture features of the EEG signals in both the time and frequency domains, while the Transformer, whose self-attention mechanism adapts strongly to sequential data, performs well at removing noise and extracting key features. The Transformer effectively captures the temporal dependencies in EEG signals, extracting salient features of the participants' motor imagery for a three-class decision: left-hand motor imagery, right-hand motor imagery, or resting state.

    The experiment is divided into offline and online training, allowing the model to learn continually and reach higher accuracy. In offline training, the model's average accuracy is 74.88%, an improvement of 6.13% over using EEGNet alone; during the online training phase, participants' game performance improved by 18.95%, confirming that the offline-plus-online training strategy helps participants perform motor imagery.
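The pipeline the abstract describes — convolution-derived EEG features passed through a self-attention stage, then a three-class head — can be sketched minimally as below. This is an illustrative NumPy sketch, not the thesis's actual model: the window count `T`, feature size `d`, and all weight matrices are invented assumptions, and the random input stands in for what EEGNet's convolutions would produce.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # X: (T, d) sequence of per-window EEG feature vectors.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # scaled dot-product scores
    return softmax(scores, axis=-1) @ V       # (T, d) context-aware features

rng = np.random.default_rng(0)
T, d = 8, 16   # assumed: 8 time windows, 16 features each (stand-in for EEGNet output)
X = rng.standard_normal((T, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

ctx = self_attention(X, Wq, Wk, Wv)
# Pool over time, then a linear three-class head:
# left-hand MI, right-hand MI, resting state.
logits = ctx.mean(axis=0) @ rng.standard_normal((d, 3))
probs = softmax(logits)   # shape (3,), sums to 1
```

In the actual system these weights would be trained end-to-end with EEGNet's convolutional front end; the sketch only shows how self-attention mixes information across time windows before the three-way classification.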
    Appears in Collections: [Graduate Institute of Electrical Engineering] Master's and Doctoral Theses

    Files in This Item:

    File: index.html — 0Kb, HTML, 16 views


    All items in NCUIR are protected by original copyright.

