

    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/93562


    Title: Development and application of a real-time brain-computer interface based on deep learning in the metaverse environment
    Author: Cao, Cheng-Fu
    Contributor: Department of Electrical Engineering
    Keywords: Brain-computer interface; Motor imagery; Virtual reality; Action observation; Deep learning; Continual learning
    Date: 2023-10-02
    Upload time: 2024-03-05 17:51:22 (UTC+8)
    Publisher: National Central University
    Abstract: Motor imagery (MI) is a common control paradigm in brain-computer interfaces (BCIs), a field with a mature body of research and many deployed applications. However, MI-BCI systems still face several challenges: subjects must undergo lengthy training before they can use the system, which greatly increases the time cost, and EEG signals are highly variable and non-stationary, differing across time and across subjects. This study therefore proposes a motor imagery training system combined with virtual reality (VR), covering four MI classes (left hand, right hand, both feet, and rest). The system is built in two stages: offline data collection and real-time feedback. In the offline stage, subjects observe the movements of a virtual avatar in the VR environment to assist MI execution, and the offline data are used to train a deep learning network that serves as the basis for subsequent real-time MI classification. In the real-time feedback stage, subjects use MI to control the avatar walking through the metaverse in real time. We introduce the concept of continual learning, using the feedback data to fine-tune the model parameters and continually improve its performance.
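The continual-learning step described above (train offline, then fine-tune on feedback-stage trials) can be illustrated with a minimal sketch. The thesis uses a deep network; here a multinomial logistic regression stands in for it, and all data, dimensions, and learning rates are hypothetical, not the authors' actual settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_step(W, X, y, lr):
    """One gradient step of multinomial logistic regression (stand-in for the deep network)."""
    p = softmax(X @ W)                    # (n, 4) class probabilities
    onehot = np.eye(W.shape[1])[y]
    grad = X.T @ (p - onehot) / len(X)    # cross-entropy gradient
    return W - lr * grad

n_features, n_classes = 16, 4             # 4 MI classes: left hand, right hand, feet, rest

# Hypothetical offline-stage data: feature vectors extracted from EEG trials.
X_off = rng.normal(size=(200, n_features))
y_off = rng.integers(0, n_classes, size=200)

W = np.zeros((n_features, n_classes))
for _ in range(50):                       # offline training
    W = train_step(W, X_off, y_off, lr=0.5)

# Continual learning: fine-tune on newly collected feedback-stage trials,
# using a smaller learning rate so the offline knowledge is not overwritten.
X_fb = rng.normal(size=(40, n_features))
y_fb = rng.integers(0, n_classes, size=40)
for _ in range(10):
    W = train_step(W, X_fb, y_fb, lr=0.05)

pred = softmax(X_fb @ W).argmax(axis=1)   # real-time classification of feedback trials
```

The small fine-tuning learning rate is the usual way to adapt a pretrained model to a drifting EEG distribution without discarding what was learned offline.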
    A total of five subjects participated in the experiment. The offline model reached an average accuracy of 52.8%, outperforming related models (S3T, EEGNet, DeepConvNet, and ShallowConvNet), and the real-time feedback model improved from an average accuracy of 47.4% to 66.2%, a gain of 18.8 percentage points. We also analyzed the offline-stage and feedback-stage data with ERD/ERS; the results show that action observation aids the execution of MI, and that the data captured by the model during real-time feedback are reasonable and interpretable. The system proposed in this study may serve as a new direction for MI-BCI training.
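The ERD/ERS analysis mentioned above compares band power during the task against a resting baseline: ERD% = (task power − baseline power) / baseline power × 100, with negative values indicating event-related desynchronization. A minimal sketch on a synthetic single-channel signal (the sampling rate, trial timing, and mu band here are illustrative assumptions, not the study's recording parameters):

```python
import numpy as np

fs = 250                                   # hypothetical sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)                # one 4-second trial

# Synthetic EEG: a 10 Hz mu rhythm whose amplitude drops after t = 2 s,
# mimicking ERD at motor-imagery onset.
amp = np.where(t < 2.0, 1.0, 0.4)
eeg = amp * np.sin(2 * np.pi * 10 * t) \
    + 0.05 * np.random.default_rng(1).normal(size=t.size)

def band_power(x, fs, lo, hi):
    """Mean power in the [lo, hi] Hz band via the FFT periodogram."""
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / x.size
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

baseline = band_power(eeg[t < 2.0], fs, 8, 12)   # rest period, mu band
task = band_power(eeg[t >= 2.0], fs, 8, 12)      # imagery period, mu band
erd = (task - baseline) / baseline * 100         # strongly negative here -> ERD
```

In practice this is computed per channel over sensorimotor electrodes and averaged over trials; a clear mu-band ERD over the contralateral motor cortex is the physiological signature the study uses to validate its data.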
    Appears in Collections: [Graduate Institute of Electrical Engineering] Theses & Dissertations

    Files in This Item:

    File: index.html (HTML, 0 Kb, 30 views)


    All items in NCUIR are protected by copyright.

