References
[1] Peter Wilson, "Design Recipes for FPGAs (Second Edition), Chapter 9 - Digital Filters", pp. 117-134, Elsevier, 2016.
[2] J. Benesty and P. Duhamel, "A fast exact least mean square adaptive algorithm," IEEE Trans. Signal Processing, vol. 40, pp. 2904-2920, 1992.
[3] M. Z. Ilyas, A. O. Noor, K. A. Ishak, A. Hussain, and S. A. Samad, "Normalized Least Mean Square Adaptive Noise Cancellation Filtering for Speaker Verification in Noisy Environments," in Proc. International Conference on Electronic Design, 2008.
[4] Speex: an open source speech codec, https://www.speex.org/
[5] J. S. Soo and K. K. Pang, “Multidelay block frequency domain adaptive filter,” IEEE Trans. Acoust. Speech Signal Process., vol. 38, no. 2, pp. 373–376, Feb. 1990.
[6] A. M. Turing, "Computing machinery and intelligence," Mind, vol. 59, no. 236, pp. 433-460, 1950.
[7] J. R. Searle, "Minds, brains, and programs," Behavioral and Brain Sciences, vol. 3, no. 3, pp. 417-457, 1980.
[8] W. S. McCulloch and W. Pitts, "A logical calculus of the ideas immanent in nervous activity," Bulletin of Mathematical Biophysics, vol. 5, no. 4, pp. 115-133, Dec. 1943.
[9] F. A. Makinde, C. T. Ako, O. D. Orodu, and I. U. Asuquo, "Prediction of crude oil viscosity using feed-forward back-propagation neural network (FFBPNN)," Petroleum and Coal, vol. 54, pp. 120-131, 2012.
[10] D. O. Hebb, "The Organization of Behavior," New York: Wiley & Sons, 1949.
[11] F. Rosenblatt, "The perceptron: A probabilistic model for information storage and organization in the brain," Psychological Review, vol. 65, no. 6, pp. 386-408, 1958.
[12] M. Minsky and S. Papert, "Perceptrons," Cambridge, MA: MIT Press, 1969.
[13] P. J. Werbos, “Beyond regression: new tools for prediction and analysis in the behavioral sciences,” Ph.D. thesis, Harvard University, 1974.
[14] M. Minsky and S. Papert, "Perceptrons," Cambridge, MA: MIT Press, 1969.
[15] J. J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities," Proc. Natl. Acad. Sci. USA, vol. 79, pp. 2554-2558, Apr. 1982.
[16] L. F. Lamel, R. H. Kassel, and S. Seneff, “Speech database development: Design and analysis of the acoustic-phonetic corpus,” in Speech Input/Output Assessment and Speech Databases, 1989.
[17] S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Computation, vol. 9, no. 8, pp. 1735-1780, 1997.
[18] J. Chung, C. Gulcehre, K. Cho, and Y. Bengio, "Empirical evaluation of gated recurrent neural networks on sequence modeling," arXiv:1412.3555 [cs], Dec. 2014.
[19] J. B. Allen and D. A. Berkley, “Image method for efficiently simulating small-room acoustics,” The Journal of the Acoustical Society of America, vol. 65, no. 4, pp. 943–950, 1979.
[20] D. Yu, M. Kolbæk, Z.-H. Tan, and J. Jensen, "Permutation invariant training of deep models for speaker-independent multi-talker speech separation," in Proc. ICASSP, pp. 241-245, 2017.
[21] Y. Wang, A. Narayanan, and D. L. Wang, "On training targets for supervised speech separation," IEEE/ACM Trans. Audio Speech Lang. Proc., vol. 22, pp. 1849-1858, 2014.
[22] TensorFlow: an open source Python package for machine intelligence, https://www.tensorflow.org, retrieved Dec. 1, 2016.
[23] J. Dean et al., "Large-scale deep learning for building intelligent computer systems," in Proceedings of the Ninth ACM International Conference on Web Search and Data Mining, pp. 1-1, Feb. 2016.
[24] Librosa: an open source Python package for music and audio analysis, https://github.com/librosa, retrieved Dec. 1, 2016.
[25] B. McFee, C. Raffel, D. Liang, D. P. W. Ellis, M. McVicar, E. Battenberg, and O. Nieto, "librosa: Audio and music signal analysis in Python," in Proceedings of the 14th Python in Science Conference, Jul. 2015.