Abstract: Motor imagery (MI) is a common control method in brain-computer interfaces (BCIs), and the field already has a mature body of research and many practical applications. However, MI-BCI systems still face several challenges: subjects must undergo lengthy training before they can use the system, which greatly increases the time cost, and EEG signals are highly variable and non-stationary, differing over time and across subjects. This study therefore proposes a motor imagery training system combined with virtual reality (VR), covering four MI classes (left hand, right hand, both feet, and rest). The system is built in two stages: offline data collection and real-time feedback. In the offline stage, subjects observe the movements of a virtual avatar in the VR environment to assist MI execution, and the offline data are used to train a deep learning network that serves as the basis for subsequent real-time MI classification. In the real-time feedback stage, subjects use MI to control the avatar walking in the metaverse in real time. We introduce the concept of continual learning and use the feedback data to fine-tune the model parameters, continually improving model performance. Five subjects participated in the experiment. The results show that the offline model achieved an average accuracy of 52.8%, outperforming related models (S3T, EEGNet, DeepConvNet, and ShallowConvNet), and the real-time feedback model improved from an average accuracy of 47.4% to 66.2%, a gain of 18.8 percentage points. We also analyzed the offline and feedback data using ERD/ERS; the results indicate that action observation aids MI execution and that the features captured by the model during the real-time feedback stage are reasonable and interpretable. The proposed system is expected to serve as a new direction for MI-BCI training in the future.
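The continual-learning step summarized above, fine-tuning an offline-trained MI classifier with newly collected feedback trials, can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes PyTorch, and the model class SimpleEEGClassifier, the channel/sample dimensions, the learning rate, and the data shapes are all illustrative assumptions standing in for the unspecified EEGNet-style network and feedback pipeline.

```python
# Minimal sketch (assumptions): fine-tune a pre-trained 4-class MI classifier
# (left hand, right hand, both feet, rest) on labelled real-time feedback data.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class SimpleEEGClassifier(nn.Module):
    """Hypothetical EEGNet-style stand-in for the offline-trained network."""
    def __init__(self, n_channels=32, n_samples=500, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=(1, 64), padding=(0, 32)),   # temporal filters
            nn.BatchNorm2d(8),
            nn.Conv2d(8, 16, kernel_size=(n_channels, 1), groups=8), # spatial filters
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.AvgPool2d((1, 4)),
            nn.Flatten(),
        )
        with torch.no_grad():
            feat_dim = self.features(torch.zeros(1, 1, n_channels, n_samples)).shape[1]
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, x):  # x: (batch, 1, channels, samples)
        return self.classifier(self.features(x))

def fine_tune(model, feedback_x, feedback_y, epochs=10, lr=1e-4):
    """Continual-learning step: update model parameters on feedback trials.
    A small learning rate is used so the offline knowledge is not overwritten."""
    loader = DataLoader(TensorDataset(feedback_x, feedback_y),
                        batch_size=16, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
    return model

if __name__ == "__main__":
    model = SimpleEEGClassifier()
    # model.load_state_dict(torch.load("offline_model.pt"))  # offline-stage weights
    x = torch.randn(64, 1, 32, 500)        # feedback EEG trials (synthetic here)
    y = torch.randint(0, 4, (64,))         # labels: 0=left, 1=right, 2=feet, 3=rest
    fine_tune(model, x, y)
```

In this sketch the feedback trials are simply replayed for a few epochs with a small learning rate; more elaborate continual-learning strategies (e.g., rehearsal with offline data) could be substituted without changing the overall two-stage structure described in the abstract.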