

    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/88146


    Title: Time- and frequency-related EEG features for multi-category classification using deep learning method
    Authors: Wang, Wei
    Contributors: Graduate Institute of Biomedical Engineering
    Keywords: Electroencephalography; Deep Learning; t-test; multi-class
    Date: 2022-01-20
    Upload Date: 2022-07-13 18:12:32 (UTC+8)
    Publisher: National Central University
    Abstract: Deep learning is a family of machine-learning algorithms based on representation learning of data.
    According to past studies, it has been a challenge to identify the features of EEG signals recorded under different tasks and to extract and classify them experimentally. Classifiers built on time- and frequency-domain features for 2-4 categories have typically reached accuracies of only 60%-80%.
    The purpose of this study is to apply deep learning to multi-category EEG classification and to build a model with high classification accuracy, that is, to extend a brain-computer interface (BCI) to five output categories. Subjects were presented with visual stimuli while their EEG was recorded; the time- and frequency-domain signals of 14 channels were analyzed to capture features corresponding to the five directions "forward, backward, left, right, and stop", and these features were used for multi-category classification. So far we have run 21 experiments (3 subjects, 7 sessions each); each experiment yields 20 trials, each trial covering the 5 categories, sampled at 512 Hz. After removing EOG artifacts from the 14 channels, we used the Keras deep-learning framework with supervised learning. Analyzing each channel's time-domain maximum (and its latency), minimum (and its latency), and mean, and validating the model with k-fold cross-validation, we obtained 81.00% accuracy. Analyzing frequency-domain features in the same way, we took the 1-58 Hz range and divided it into five bands, Delta (1-3 Hz), Theta (4-7 Hz), Alpha (8-14 Hz), Beta (13-28 Hz), and Gamma (29-58 Hz), and obtained 78.10% accuracy. Combining the time- and frequency-domain features raised the accuracy to 83.40%. Finally, after screening the features with t-tests and training both CNN and MLP models, we found that the MLP model using combined time- and frequency-domain features, keeping features significant in at least one t-test comparison, achieved the highest average accuracy of 87.41%.
    Appears in Collections: [Graduate Institute of Biomedical Engineering] Theses & Dissertations

    Files in This Item:

    File | Description | Size | Format | Views
    index.html |  | 0Kb | HTML | 64


    All items in NCUIR are protected by copyright, with all rights reserved.
