Electronic Theses and Dissertations: Detailed Record for Thesis 110525010




Name: Yen-An Kung (龔彥安)    Graduating Institute: Graduate Institute of Software Engineering
Thesis Title: Virtual Psychologist for Dimensional Psychological Quality Analysis with Multi-Modal Feature Fusion
(多模態特徵融合的維度心理素質分析虛擬心理師)
Related Theses
★ Interpreting the Effect of Virtual Reality Distractions on Attention from EEG
★ A Fusion-Based Deep Learning Approach to ADHD Assessment Using a Virtual Classroom Game
★ Comparing the Finger-Opposition Gesture Prediction Accuracy of Media-Pipe and Leap Motion with Hierarchical Co-occurrence Networks in a VR System for Developmental Delay
★ Eye-Movement-Based Analysis and Diagnosis of Dyslexia
★ Reliable Multicast Transmission with Multiple Tree Structures
★ Design and Development of Cross-Platform Widgets on Embedded Mobile Devices
★ Implementing a Lightweight GUI Library for Handheld Multimedia Playback Devices on ARM-Based Embedded Systems
★ A Scalable QoS-Aware GStreamer Module Designed for Networked Mobile Devices
★ Developing a Scalable, Cross-Platform GSM/HSDPA Engine for Mobile Network Devices
★ Efficient Multi-Format Decoding Management on Single-Chip Multimedia Devices
★ IMS Client Design and Instant Messaging Module Development: Implementing the Personal Information Exchange and Instant Messaging Modules
★ Implementing a User-Friendly Embedded Small-Screen Web Browser on Portable Multimedia Devices
★ Implementation of an IMS-Based Real-Time Voice and Video Call Engine Using Open-Source Libraries
★ Embedded E-Book Development: Custom Download Service Implementation and Data Storage Management Design
★ Efficient Frame-Reference Handling and Multimedia-Metadata-Aware Player Design for Digital Set-Top Boxes
★ Developing Digitally Secure E-Books: An Efficient Update Module and Database Implementation
Files: Full text available for browsing in the system after 2028-07-01.
Abstract (Chinese)
Psychological quality is an endogenous factor of mental health. The prevailing methods for assessing it rely on questionnaires and scales, which are costly and time-consuming. Recent studies indicate that features drawn from text, speech, facial expressions, heart rate, and eye movement are applicable to psychological quality assessment. In this paper, we build a virtual therapist for automatic psychological quality assessment on mobile devices; it proactively guides users through voice dialogue and adapts the conversation content using emotion detection. During the conversation, features are extracted from text, speech, facial expressions, heart rate, and eye movement for multi-modal psychological quality assessment. We apply two fusion frameworks and machine learning to automatic psychological quality analysis, classifying different sets of dimensions and factors: depression, body, excitement, instability, anxiety, family care, independence, melancholic tendency, manic tendency, anxious tendency, and family relationship. On data collected from 168 participants, the experiments show that the fusion framework using all five modal features achieved the highest accuracies for these eleven factors: 68.66%, 74.66%, 72.06%, 93.65%, 70.66%, 72.66%, 93.33%, 68.66%, 84.43%, 70.66%, and 72%, respectively.
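The abstract refers to two fusion frameworks for combining the five modalities without specifying them here. A standard form is feature-level (early) fusion, in which the per-modality feature vectors are concatenated into a single vector before one classifier is trained. The sketch below illustrates that idea only; the feature matrices, their dimensions, and the logistic-regression classifier are assumptions for demonstration, not the thesis's actual pipeline.

# Minimal sketch of feature-level (early) fusion over five modalities.
# All feature arrays and dimensions are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 168  # number of participants reported in the abstract

# Hypothetical per-modality feature matrices (dimensions are made up).
text_f  = rng.normal(size=(n, 32))
audio_f = rng.normal(size=(n, 24))
face_f  = rng.normal(size=(n, 40))
hr_f    = rng.normal(size=(n, 8))
eye_f   = rng.normal(size=(n, 16))
y = rng.integers(0, 2, size=n)  # binary label for one factor, e.g. depression

# Early fusion: concatenate all modality features into one vector per subject.
X = np.concatenate([text_f, audio_f, face_f, hr_f, eye_f], axis=1)

scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.3f}")

Early fusion lets the classifier learn cross-modal interactions directly, at the cost of a high-dimensional input relative to the 168 subjects.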
Abstract (English)
Psychological quality plays a crucial role in mental health and is typically assessed through questionnaires and scales, which can be expensive and time-consuming. However, recent research has shown promising alternatives for assessing psychological quality, including the analysis of text, audio, facial attributes, heart rate, and eye movement. In this paper, we propose a virtual therapist specifically designed for automatic psychological quality assessment on mobile devices. This virtual therapist actively engages users in voice dialogue, adapting the conversation content based on emotion perception. Throughout the conversation, we extract features from multiple modalities, including text, audio, facial attributes, heart rate, and eye movement, enabling a comprehensive assessment of psychological quality. We utilize two fusion frameworks for automatic psychological quality analysis and machine learning to classify the varying sets of dimensions and factors, which include depression, body, excitement, instability, anxiety, family care, independence, melancholic, manic, and anxious tendencies, as well as family relationship. Based on the data collected from 168 participants, the experimental results demonstrate the effectiveness of our fusion framework utilizing five modal features. The highest accuracy rates were achieved for depression, body, excitement, instability, anxiety, family care, independence, melancholic tendencies, manic tendencies, anxious tendencies, and family relationship: 68.66 percent, 74.66 percent, 72.06 percent, 93.65 percent, 70.66 percent, 72.66 percent, 93.33 percent, 68.66 percent, 84.43 percent, 70.66 percent, and 72 percent, respectively. These findings highlight the robustness and reliability of our approach in accurately assessing and predicting various aspects of psychological well-being.
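The usual counterpart to early fusion is decision-level (late) fusion: one classifier per modality, with their outputs combined, here by averaging class probabilities. As above, this is a minimal sketch under assumed inputs; the SVM classifiers and the averaging rule are illustrative choices, not necessarily either of the thesis's two frameworks.

# Minimal sketch of decision-level (late) fusion over five modalities.
# Feature matrices, dimensions, and classifiers are hypothetical.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 168  # participants, per the abstract
modalities = {
    "text": rng.normal(size=(n, 32)),
    "audio": rng.normal(size=(n, 24)),
    "face": rng.normal(size=(n, 40)),
    "heart_rate": rng.normal(size=(n, 8)),
    "eye": rng.normal(size=(n, 16)),
}
y = rng.integers(0, 2, size=n)  # binary label for one factor

idx_train, idx_test = train_test_split(np.arange(n), test_size=0.3, random_state=0)

# Train one classifier per modality, then average their class probabilities.
probas = []
for X in modalities.values():
    clf = SVC(probability=True).fit(X[idx_train], y[idx_train])
    probas.append(clf.predict_proba(X[idx_test]))

fused = np.mean(probas, axis=0)  # decision-level fusion by averaging
pred = fused.argmax(axis=1)
print(f"late-fusion accuracy: {(pred == y[idx_test]).mean():.3f}")

Late fusion degrades more gracefully when one modality is missing or noisy, since the remaining per-modality classifiers still contribute to the averaged decision.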
Keywords (Chinese) ★ 虛擬人
★ 自動性格識別
★ 多模態融合
Keywords (English) ★ Virtual Human
★ Automatic Personality Recognition
★ Multi-Modal Fusion
Thesis Table of Contents
Chinese Abstract
English Abstract
Acknowledgments
List of Figures
List of Tables
List of Symbols
1. Introduction
2. Related Work
3. Research Content and Methods
    3-1 Virtual Therapist
    3-2 Dialogue System
        3-2-1 Uni-modal Emotion Recognition
        3-2-2 3-Pass Algorithm
    3-3 Multi-modal Feature Extraction
        3-3-1 Text
        3-3-2 Audio
        3-3-3 Facial Attributes
        3-3-4 Heart Rate Variability
        3-3-5 Eye Movement
    3-4 Psychological Quality Analysis
    3-5 Evaluation Metrics
4. Experimental Results
5. Discussion
6. Conclusion and Future Work
References
Advisor: Eric Hsiao-Kuang Wu (吳曉光)    Date Approved: 2023-07-24
