References
[1] P. Zou, Y. Wu, and J. Zhang, “Construction and application of psychological quality assessment model for college
students based on extensive data analysis,” Occupational Therapy International, vol. 2022, pp. 1–12, 2022.
[2] W.-L. Wang and D.-M. Miao, Research Review of College Students’ Psychological Quality. Distributed by ERIC
Clearinghouse, 2007.
[3] M. Hamilton, “A rating scale for depression,” Journal of Neurology, Neurosurgery &amp; Psychiatry, vol. 23, no. 1, pp.
56–62, 1960.
[4] J. C. LeBlanc, A. Almudevar, S. J. Brooks, and S. Kutcher, “Screening for adolescent depression: comparison of the
Kutcher Adolescent Depression Scale with the Beck Depression Inventory,” Journal of Child and Adolescent
Psychopharmacology, vol. 12, no. 2, pp. 113–126, 2002.
[5] J. Angst, R. Adolfsson, F. Benazzi, A. Gamma, E. Hantouche, T. D. Meyer, and J. Scott, “The HCL-32: Towards a
self-assessment tool for hypomanic symptoms in outpatients,” Journal of Affective Disorders, vol. 88, no. 2, pp.
217–233, 2005.
[6] M. Hamilton, “The assessment of anxiety states by rating,” British Journal of Medical Psychology, vol. 32, no. 1, pp. 50–55, 1959.
[7] B. Birmaher, S. Khetarpal, D. Brent, M. Cully, L. Balach, J. Kaufman, and S. M. Neer, “The Screen for Child Anxiety
Related Emotional Disorders (SCARED): Scale construction and psychometric characteristics,” Journal of the American
Academy of Child &amp; Adolescent Psychiatry, vol. 36, no. 4, pp. 545–553, 1997.
[8] R. H. Moos, Family Environment Scale Manual: Development, Applications, Research. Consulting Psychologists Press, 1994.
[9] X. Wang, X. Wang, and H. Ma, Eds., “Rating scales for mental health,” Chinese Mental Health Journal, vol. 13, supplement, 1999.
[10] G. M. Lucas, J. Gratch, A. King, and L.-P. Morency, “It’s only a computer: Virtual humans increase willingness to
disclose,” Computers in Human Behavior, vol. 37, pp. 94–100, 2014.
[11] M. D. Pickard, C. A. Roster, and Y. Chen, “Revealing sensitive information in personal interviews: Is
self-disclosure easier with humans or avatars and under what conditions?” Computers in Human Behavior, vol. 65, pp.
23–30, 2016.
[12] R. S. Camati and F. Enembreck, “Text-based automatic personality recognition,” Oct. 2020.
[13] Z. T. Liu, A. Rehman, M. Wu, W. H. Cao, and M. Hao, “Speech personality recognition based on annotation
classification using log-likelihood distance and extraction of essential audio features,” IEEE Transactions on Multimedia,
vol. 23, pp. 3414–3426, 2020.
[14] S. Song, S. Jaiswal, E. Sanchez, G. Tzimiropoulos, L. Shen, and M. Valstar, “Self-supervised learning of
person-specific facial dynamics for automatic personality recognition,” IEEE Transactions on Affective Computing, 2021.
[15] R. Subramanian, J. Wache, M. K. Abadi, R. L. Vieriu, S. Winkler, and N. Sebe, “ASCERTAIN: Emotion and
personality recognition using commercial sensors,” IEEE Transactions on Affective Computing, vol. 9, no. 2, pp.
147–160, 2016.
[16] H. Y. Suen, K. E. Hung, and C. L. Lin, “TensorFlow-based automatic personality recognition used in asynchronous
video interviews,” IEEE Access, vol. 7, pp. 61018–61023, 2019.
[17] V. Moscato, A. Picariello, and G. Sperli, “An emotional recommender system for music,” IEEE Intelligent Systems,
vol. 36, no. 5, pp. 57–68, 2020.
[18] S. O. Lilienfeld, A. L. Watts, B. Murphy, T. H. Costello, S. M. Bowes, S. F. Smith, and K. Tabb, “Personality
disorders as emergent interpersonal syndromes: Psychopathic personality as a case example,” Journal of Personality Disorders, vol. 33, no. 5, pp. 577–622, 2019.
[19] D. A. Parry, B. I. Davidson, C. J. Sewall, J. T. Fisher, H. Mieczkowski, and D. S. Quintana, “A systematic review
and meta-analysis of discrepancies between logged and self-reported digital media use,” Nature Human Behaviour, vol.
5, no. 11, pp. 1535–1547, 2021.
[20] K. Börner, A. Bueckle, and M. Ginda, “Data visualization literacy: Definitions, conceptual frameworks, exercises,
and assessments,” Proceedings of the National Academy of Sciences, vol. 116, no. 6, pp. 1857–1864, 2019.
[21] K. Charmaz and R. Thornberg, “The pursuit of quality in grounded theory,” Qualitative Research in Psychology, vol.
18, no. 3, pp. 305–327, 2021.
[22] M. A. Brackett, C. S. Bailey, J. D. Hoffmann, and D. N. Simmons, “RULER: A theory-driven, systemic approach to
social, emotional, and academic learning,” Educational Psychologist, vol. 54, no. 3, pp. 144–161, 2019.
[23] S. Jaiswal, M. Valstar, K. Kusumam, and C. Greenhalgh, “Virtual human questionnaire for analysis of
depression, anxiety and personality,” Jul. 2019.
[24] R. D. P. Principi, C. Palmero, J. C. J. Junior, and S. Escalera, “On the effect of observed subject biases in apparent
personality analysis from audio-visual signals,” IEEE Transactions on Affective Computing, vol. 12, no. 3, pp. 607–621,
2019.
[25] C. Suman, S. Saha, A. Gupta, S. K. Pandey, and P. Bhattacharyya, “A multi-modal personality prediction system,”
Knowledge-Based Systems, vol. 236, p. 107715, 2022.
[26] S. Peng and K. Nagao, “Recognition of students’ mental states in discussion based on multimodal data and its
application to educational support,” IEEE Access, vol. 9, pp. 18235–18250, 2021.
[27] H. Tian, C. Gao, X. Xiao, H. Liu, B. He, H. Wu, H. Wang, and F. Wu, “SKEP: Sentiment knowledge enhanced
pre-training for sentiment analysis,” arXiv preprint arXiv:2005.05635, 2020.
[28] M. Tan and Q. Le, “EfficientNet: Rethinking model scaling for convolutional neural networks,” in International
Conference on Machine Learning, 2019.
[29] A. Zadeh and P. Pu, “Multimodal language analysis in the wild: CMU-MOSEI dataset and interpretable dynamic fusion
graph,” in Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers),
2018.
[30] C. Busso, M. Bulut, C.-C. Lee, A. Kazemzadeh, E. Mower, S. Kim, J. N. Chang, S. Lee, and S. S. Narayanan,
“IEMOCAP: Interactive emotional dyadic motion capture database,” Language Resources and Evaluation, vol. 42, no. 4,
pp. 335–359, 2008.
[31] O. Wiles, A. Koepke, and A. Zisserman, “Self-supervised learning of a facial attribute embedding from video,”
arXiv preprint arXiv:1808.06882, 2018.
[32] A. Nagrani, J. S. Chung, and A. Zisserman, “VoxCeleb: A large-scale speaker identification dataset,” arXiv
preprint arXiv:1706.08612, 2017.
[33] J. S. Chung, A. Nagrani, and A. Zisserman, “VoxCeleb2: Deep speaker recognition,” arXiv preprint arXiv:1806.05622,
2018.
[34] G. Boccignone, D. Conte, V. Cuculo, A. D’Amelio, G. Grossi, and R. Lanzarotti, “An open framework for
remote-PPG methods and their assessment,” IEEE Access, 2020.
[35] M. Li, L. Cao, Q. Zhai, P. Li, S. Liu, R. Li, L. Feng, G. Wang, B. Hu, and S. Lu, “Method of depression
classification based on behavioral and physiological signals of eye movement,” Complexity, vol. 2020, Art. no. 4174857,
Jan. 2020.
[36] S. Na, L. Xumin, and G. Yong, “Research on k-means clustering algorithm: An improved k-means clustering
algorithm,” in 2010 Third International Symposium on Intelligent Information Technology and Security Informatics,
2010, pp. 63–67.
[37] M. Cui, “Introduction to the k-means clustering algorithm based on the elbow method,” vol. 3, pp. 9–16, 2020.