References
[1] L. Lu, H.-J. Zhang, and H. Jiang, “Content analysis for audio classification and segmentation,” IEEE Trans. Speech and Audio Processing, vol. 10, no. 7, pp. 504–516, Oct. 2002.
[2] J.-C. Wang, J.-F. Wang, K. W. He, and C.-S. Hsu, “Environmental sound classification using hybrid SVM/KNN classifier and MPEG-7 audio low-level descriptor,” in Proc. Int. Joint Conf. Neural Networks, Vancouver, British Columbia, Canada, July 2006, pp. 1731–1735.
[3] S. Chu, S. Narayanan, C.-C. J. Kuo, and M. J. Mataric, “Where am I? Scene recognition for mobile robots using audio features,” in Proc. IEEE Int. Conf. Multimedia and Expo, Toronto, Ontario, Canada, July 2006, pp. 885–888.
[4] J. Huang, “Spatial auditory processing for a hearing robot,” in Proc. IEEE Int. Conf. Multimedia and Expo, Lausanne, Switzerland, vol. 2, Sep. 2002, pp. 253–256.
[5] E. Wold, T. Blum, D. Keislar, and J. Wheaton, “Content-based classification, search, and retrieval of audio,” IEEE MultiMedia, vol. 3, no. 3, pp. 27–36, Fall 1996.
[6] J. T. Foote, “Content-based retrieval of music and audio,” in Proc. SPIE Conf. Multimedia Storage and Archiving Systems II, Dallas, Texas, United States, Nov. 1997, pp. 138–147.
[7] V. Peltonen, J. Tuomi, A. Klapuri, J. Huopaniemi, and T.Sorsa, “Computational auditory scene recognition,” in Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing, Orlando, Florida, United States, May 2002, pp. 1941–1944.
[8] S. Z. Li, “Content-based audio classification and retrieval using the nearest feature line method,” IEEE Trans. Speech and Audio Processing, vol. 8, no. 5, pp. 619–625, Sep. 2000.
[9] G. Guo and S. Z. Li, “Content-based audio classification and retrieval by support vector machines,” IEEE Trans. Neural Networks, vol. 14, no. 1, pp. 209–215, Jan. 2003.
[10] J. Zheng, G. Wei, and C. Yang, “Modified local discriminant bases and its application in audio feature extraction,” in Proc. Int. Forum on Information Technology and Application, Chengdu, China, May 2009, pp. 42–52.
[11] H. M. Hadi, M. Y. Mashor, M. S. Mohamed, and K. B. Tat, “Classification of heart sounds using wavelets and neural networks,” in Proc. 5th Int. Conf. Electrical Engineering, Computing Science and Automatic Control, Mexico, Nov. 2008, pp. 177–180.
[12] S. P. Ebenezer, “Classification of acoustic emissions using modified matching pursuit,” EURASIP Journal on Applied Signal Processing, pp. 347–357, 2004.
[13] K. Umapathy, S. Krishnan, and S. Jimaa, “Multigroup classification of audio signals using time-frequency parameters,” IEEE Trans. Multimedia, vol. 7, no. 2, pp. 308–315, Apr. 2005.
[14] S. Chu, S. Narayanan, and C.-C. J. Kuo, “Environmental sound recognition with time-frequency audio features,” IEEE Trans. Audio, Speech, and Language Processing, vol. 17, no. 6, pp. 1142–1158, Aug. 2009.
[15] K. Umapathy and S. Krishnan, “Sub-dictionary selection using local discriminant bases algorithm for signal classification,” in Proc. Canadian Conf. Electrical and Computer Engineering, Canada, vol. 4, May 2004, pp. 2001–2004.
[16] K. Umapathy and S. Krishnan, “A signal classification approach using time-width vs frequency band sub-energy distributions,” in Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing, Philadelphia, Pennsylvania, USA, vol. 5, Mar. 2005, pp. 477–480.
[17] S. G. Mallat and Z. Zhang, “Matching pursuits with time-frequency dictionaries,” IEEE Trans. Signal Processing, vol. 41, no. 12, pp. 3397–3415, Dec. 1993.
[18] 王小川, Speech Signal Processing (語音訊號處理), revised ed., Chuan Hwa Book Co., Taipei County, Taiwan, 2007.
[19] C. Cortes and V. Vapnik, “Support-vector networks,” Machine Learning, vol. 20, no. 3, pp. 273–297, 1995.
[20] J. Shlens, “A Tutorial on Principal Component Analysis,” Systems Neurobiology Laboratory, ver. 3.01, Apr. 2005.
[21] A. M. Martinez and A. C. Kak, “PCA versus LDA,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 23, no. 2, pp. 228–233, Feb. 2001.
[22] K. Fukunaga, Introduction to Statistical Pattern Recognition, second ed. Academic Press, 1990.
[23] T. L. Nwe, S. W. Foo, and L. C. De Silva, “Speech emotion recognition using hidden Markov models,” Speech Communication, vol. 41, no. 4, pp. 603–623, 2003.
[24] H. Teager and S. Teager, “Evidence for nonlinear production mechanisms in the vocal tract,” in Speech Production and Speech Modelling, NATO Advanced Study Institute Series, vol. 55, pp. 241–261, 1990.