Abstract: | Brain-Computer Interface (BCI) research is flourishing, showing great application potential in fields such as medical rehabilitation, intelligent device control, and entertainment. One important application direction of BCI is Motor Imagery (MI), and when combined with Virtual Reality (VR) interaction, Action Observation (AO) is also regarded as an important strategy in motor imagery training. This thesis therefore combines VR with Unity to design two game scenarios for participants, pure MI and AO+MI, and analyzes the changes in each participant's EEG topographic maps over the course of the experiment. Each participant provides electroencephalogram (EEG) signal data while performing left-hand motor imagery, right-hand motor imagery, and resting. This thesis adopts a model that combines the Transformer architecture with EEGNet. EEGNet's convolutional layers are used to extract features of the EEG signal in both the time and frequency domains, while the Transformer, whose self-attention mechanism adapts well to sequential data, performs strongly in removing noise and extracting key features. The Transformer effectively captures the temporal dependencies in EEG signals, extracting salient features that represent the participant's motor imagery and enabling a three-class decision among left-hand motor imagery, right-hand motor imagery, and the resting state. The experiment is further divided into offline training and online training, allowing the model to keep learning and reach higher accuracy. In offline training the model achieves an average accuracy of 74.88%, an improvement of 6.13% over using EEGNet alone, and in the online training phase the participants' game performance improves by 18.95%, confirming that the combined offline and online training strategy helps participants perform motor imagery. |
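The following is a minimal PyTorch sketch of the kind of hybrid described in the abstract: an EEGNet-style convolutional front end (temporal, depthwise spatial, and separable convolutions) feeding a Transformer encoder for three-class decoding of left-hand MI, right-hand MI, and rest. The class name, layer sizes, kernel lengths, input shape (32 electrodes, 500 samples), and head/layer counts are illustrative assumptions, not the exact configuration used in the thesis.

```python
# Sketch of an EEGNet-style front end + Transformer encoder for 3-class MI
# decoding (left hand / right hand / rest). Hyperparameters are assumptions.
import torch
import torch.nn as nn


class EEGNetTransformer(nn.Module):
    def __init__(self, n_channels=32, n_classes=3,
                 f1=8, depth=2, f2=16, n_heads=4, n_layers=2):
        super().__init__()
        # Temporal convolution: learns frequency-band filters along time.
        self.temporal = nn.Sequential(
            nn.Conv2d(1, f1, (1, 64), padding=(0, 32), bias=False),
            nn.BatchNorm2d(f1),
        )
        # Depthwise spatial convolution: learns spatial filters across electrodes.
        self.spatial = nn.Sequential(
            nn.Conv2d(f1, f1 * depth, (n_channels, 1), groups=f1, bias=False),
            nn.BatchNorm2d(f1 * depth),
            nn.ELU(),
            nn.AvgPool2d((1, 4)),
            nn.Dropout(0.25),
        )
        # Separable convolution: mixes feature maps and shortens the time axis.
        self.separable = nn.Sequential(
            nn.Conv2d(f1 * depth, f1 * depth, (1, 16), padding=(0, 8),
                      groups=f1 * depth, bias=False),
            nn.Conv2d(f1 * depth, f2, (1, 1), bias=False),
            nn.BatchNorm2d(f2),
            nn.ELU(),
            nn.AvgPool2d((1, 8)),
            nn.Dropout(0.25),
        )
        # Transformer encoder over the remaining time steps captures
        # longer-range temporal dependencies via self-attention.
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=f2, nhead=n_heads, dim_feedforward=64,
            dropout=0.25, batch_first=True)
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.classifier = nn.Linear(f2, n_classes)

    def forward(self, x):
        # x: (batch, n_channels, n_samples) raw EEG epoch
        x = x.unsqueeze(1)                  # -> (batch, 1, channels, time)
        x = self.temporal(x)
        x = self.spatial(x)                 # collapses the electrode dimension
        x = self.separable(x)               # -> (batch, f2, 1, time')
        x = x.squeeze(2).permute(0, 2, 1)   # -> (batch, time', f2) token sequence
        x = self.transformer(x)             # self-attention over time steps
        x = x.mean(dim=1)                   # average pooling over tokens
        return self.classifier(x)           # logits for left / right / rest


if __name__ == "__main__":
    model = EEGNetTransformer()
    dummy = torch.randn(4, 32, 500)   # 4 epochs, 32 electrodes, 500 time samples
    print(model(dummy).shape)         # torch.Size([4, 3])
```

In this sketch the convolutional stages play the role the abstract assigns to EEGNet (time- and frequency-domain feature extraction), while the Transformer encoder models the temporal dependencies among the resulting feature tokens before a linear head produces the three-class output.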