References
[1] A. Austermann, N. Esau, L. Kleinjohann, and B. Kleinjohann, “Fuzzy emotion recognition in natural speech dialogue,” in IEEE International Workshop on Robot and Human Interactive Communication, pp. 317-322, Aug. 2005.
[2] R. Banse and K. R. Scherer, “Acoustic profiles in vocal emotion expression,” Journal of Personality and Social Psychology, vol. 70, no. 3, pp. 614-636, 1996.
[3] M. W. Bhatti, Y. Wang, and L. Guan, “A neural network approach for human emotion recognition in speech,” in Proc. of the International Symposium on Circuits and Systems, vol. 2, pp. II-181-II-184, May 2004.
[4] F. Burkhardt, A. Paeschke, M. Rolfes, W. Sendlmeier, and B. Weiss, “A database of German emotional speech,” in Proc. of INTERSPEECH, pp. 1517-1520, 2005.
[5] R. R. Cornelius, “Theoretical approaches to emotion,” in Proc. of the ISCA Workshop on Speech and Emotion, pp. 3-10, 2000.
[6] R. Cowie, E. Douglas-Cowie, B. Apolloni, J. Taylor, A. Romano, and W. Fellenz, “What a neural net needs to know about emotion words,” in Proc. of the 3rd World Multiconference on Circuits, Systems, Communications and Computers, pp. 109-114, July 1999.
[7] R. Cristi, Modern Digital Signal Processing, Pacific Grove, CA, USA: Brooks/Cole-Thomson Learning, 2004.
[8] F. Dellaert, T. Polzin, and A. Waibel, “Recognizing emotion in speech,” in Proc. ICSLP, pp. 1970-1973, 1996.
[9] R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis, New York: Wiley, 1973.
[10] F. Fragopanagos and J. G. Taylor, “Emotion recognition in human-computer interaction,” Neural Networks, vol. 18, pp. 389-405, 2005.
[11] K. H. Hyun, E. H. Kim, and Y. K. Kwak, “Improvement of emotion recognition by Bayesian classifier using non-zero-pitch concept,” in IEEE International Workshop on Robot and Human Interactive Communication, pp. 312-316, Aug. 2005.
[12] D. Liu et al., “A Structure Optimization Method Based on Fisher Ratio,” in 3rd International Conf. on Natural Computation, vol. 1, pp. 54-58, 2007.
[13] I. R. Murray and J. L. Arnott, “Toward a simulation of emotion in synthetic speech: A review of the literature on human vocal emotion,” J. of the Acoustical Society of America, pp. 1097-1108, 1993.
[14] J. Nicholson, K. Takahashi, and R. Nakatsu, “Emotion recognition in speech using neural networks,” in 6th International Conf. on Neural Information Processing, vol. 2, pp. 495-501, Nov. 1999.
[15] T. L. Nwe, F. S. Wei, and L. C. De Silva, “Speech based emotion classification,” in Proc. of IEEE Region 10 International Conf. on Electrical and Electronic Technology, vol. 1, pp. 297-301, Aug. 2001.
[16] A. A. Razak, R. Komiya, and M. I. Z. Abidin, “Comparison between fuzzy and NN method for speech emotion recognition,” in 3rd International Conf. on Information Technology and Applications, vol. 1, pp. 297-302, July 2005.
[17] J. Rong, Y. P. Chen, M. Chowdhury, and G. Li, “Acoustic features extraction for emotion recognition,” in 6th IEEE/ACIS International Conf. on Computer and Information Science, pp. 419-424, 2007.
[18] J. Sato and S. Morishima, “Emotion modeling in speech production using emotion space,” in 5th IEEE International Workshop on Robot and Human Communication, pp. 472-477, Nov. 1996.
[19] K. R. Scherer, “Vocal communication of emotion: A review of research paradigms,” Speech Communication, vol. 40, pp. 227-256, 2003.
[20] B. Schuller, M. Lang, and G. Rigoll, “Automatic emotion recognition by the speech signal,” in 6th World Multiconference on Systemics, Cybernetics and Informatics, pp. 367-372, 2002.
[21] B. Schuller, M. Lang, and G. Rigoll, “Hidden Markov model-based speech emotion recognition,” in Proc. of the IEEE ICASSP Conf., pp. 1-4, April 2003.
[22] B. Schuller, M. Lang, and G. Rigoll, “Speech emotion recognition combining acoustic features and linguistic information in a hybrid support vector machine-belief network architecture,” in IEEE International Conf. on Acoustics, Speech, and Signal Processing, vol. 1, pp. I-577-I-580, May 2004.
[23] M. M. Sondhi, “New methods of pitch extraction,” IEEE Trans. on Audio and Electroacoustics, vol. 16, pp. 262-266, 1968.
[24] M. C. Su, C.-W. Liu, and S.-S. Tsay, “Neural-network-based Fuzzy Model and its Application to Transient Stability Prediction in Power Systems,” IEEE Trans. on Systems, Man, and Cybernetics, vol. 29, no. 1, pp. 149-157, 1999.
[25] M. C. Su, “Use of Neural Networks as Medical Diagnosis Expert Systems,” Computers in Biology and Medicine, vol. 24, no. 6, pp. 419-429, 1994.
[26] K. Truong and D. Van Leeuwen, “An ‘open-set’ detection evaluation methodology for automatic emotion recognition in speech,” in Workshop on Paralinguistic Speech – between Models and Data, pp. 5-10, 2007.
[27] D. Ververidis and C. Kotropoulos, “A State of the Art Review on Emotional Speech Databases,” in Proc. 1st Richmedia Conf., pp. 109-119, Oct. 2003.
[28] D. Ververidis and C. Kotropoulos, “Emotional speech recognition: Resources, features, and methods,” Speech Communication, vol. 48, no. 9, pp. 1162-1181, Jan. 2006.
[29] T. Vogt and E. André, “Improving automatic emotion recognition from speech via gender differentiation,” in Proc. Language Resources and Evaluation Conference, 2006.
[30] C. M. Whissell, “The dictionary of affect in language,” in Emotion: Theory, Research, and Experience, Robert Plutchik and Henry Kellerman, Ed. Academic Press, pp. 113-131.
[31] Z. Xiao, E. Dellandrea, W. Dou, and L. Chen, “Hierarchical classification of emotional speech,” IEEE Transactions on Multimedia, submitted for publication, 2007.
[32] T. Yamada, H. Hashimoto, and N. Tosa, “Pattern recognition of emotion with neural network,” in 21st International Conf. on Industrial Electronics, Control, and Instrumentation, vol. 1, pp. 183-187, Nov. 1995.
[33] Audio Signal Processing and Recognition. [Online]. Available: http://neural.cs.nthu.edu.tw/jang/books/audioSignalProcessing/, June 22, 2009 [date accessed]
[34] Berlin Database of Emotional Speech. [Online]. Available: http://www.expressive-speech.net/emodb, June 22, 2009 [date accessed]
[35] 求是科技, Complete Guide to Digital Image Processing Technology (數位影像處理技術大全), 文魁資訊, Taipei, 2008. (in Chinese)
[36] 蘇木春 and 張孝德, Machine Learning: Neural Networks, Fuzzy Systems, and Genetic Algorithms (機器學習:類神經網路、模糊系統以及基因演算法則), 2nd revised ed., 全華科技圖書公司, Taipei, 2006. (in Chinese)
[37] 陳萬城, “New Methods for Robust Speaker Identification in Noisy Environments” (雜訊環境下強健性語者辨識的新方法), Ph.D. dissertation, Department of Electrical Engineering, Tamkang University, Jan. 2009. (in Chinese)