    Please use this persistent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/89876


    Title: Mobile Virtual Therapist for Multi-Modal Depression-Level Assessment
    Authors: 高廷瑜;Gao, Ting-Yu
    Contributors: Department of Computer Science and Information Engineering
    Keywords: virtual human;depression recognition;multi-modal fusion
    Date: 2022-08-03
    Upload Time: 2022-10-04 12:03:16 (UTC+8)
    Publisher: National Central University
    Abstract: Depression not only afflicts hundreds of millions of people but also adds to the global burden of disability and healthcare. The primary method of diagnosing depression relies on the judgment of medical professionals during clinical interviews with patients, which is subjective and time-consuming. Recent studies have demonstrated that text, audio, facial attributes, heart rate, and eye movement can be used for depression assessment. In this paper, we construct a virtual therapist for automatic depression assessment on mobile devices that actively guides users through voice dialogue and adapts the conversation content using emotion perception. During the conversation, features extracted from text, audio, facial attributes, heart rate, and eye movement are used for multi-modal depression-level assessment. We employ a feature-level fusion framework to integrate the five modalities and a deep neural network to classify the level of depression: healthy, mild, moderate, or severe depression, as well as bipolar disorder (also known as manic depression). Experimental results from 168 subjects show that the feature-level fusion architecture with all five modal features achieves the highest overall accuracy of 90.26 percent.
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Master's and Doctoral Theses
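
    The abstract above describes a feature-level (early) fusion architecture: feature vectors extracted from the five modalities are concatenated into one vector and passed to a deep neural network with five output classes. The following is a minimal sketch of that idea, assuming a PyTorch-style model; the per-modality feature dimensions, layer sizes, and variable names are illustrative assumptions, not values taken from the thesis.

    # Minimal sketch of feature-level fusion for 5-class depression-level
    # classification. All dimensions and layer sizes below are assumptions
    # for illustration, not the thesis's actual configuration.
    import torch
    import torch.nn as nn

    class FeatureLevelFusionNet(nn.Module):
        def __init__(self, modality_dims, num_classes=5):
            super().__init__()
            # Feature-level fusion: per-modality vectors are concatenated
            # before classification, so the input width is the sum of dims.
            fused_dim = sum(modality_dims.values())
            self.classifier = nn.Sequential(
                nn.Linear(fused_dim, 256),
                nn.ReLU(),
                nn.Dropout(0.3),
                nn.Linear(256, 64),
                nn.ReLU(),
                # healthy, mild, moderate, severe, bipolar disorder
                nn.Linear(64, num_classes),
            )

        def forward(self, features):
            # `features`: dict of modality name -> tensor of shape (batch, dim)
            fused = torch.cat([features[m] for m in sorted(features)], dim=-1)
            return self.classifier(fused)

    # Hypothetical per-modality feature dimensions (assumed).
    dims = {"text": 768, "audio": 128, "face": 512, "heart_rate": 16, "eye": 32}
    model = FeatureLevelFusionNet(dims)
    batch = {m: torch.randn(4, d) for m, d in dims.items()}
    logits = model(batch)  # shape: (4, 5)

    A design note on the general approach: concatenating modality features before the classifier lets the network learn cross-modal interactions directly, which is the usual motivation for feature-level fusion over decision-level fusion, where per-modality predictions are combined only at the end.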

    Files in This Item:

    File          Description    Size    Format    Views
    index.html                   0Kb     HTML      36


    All items in NCUIR are protected by original copyright.

