References
[1] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” Commun. ACM, vol. 60, no. 6, pp. 84–90, May 2017.
[2] G. Hinton et al., “Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups,” IEEE Signal Process. Mag., vol. 29, no. 6, pp. 82–97, Nov. 2012.
[3] I. Sutskever, O. Vinyals, and Q. V. Le, “Sequence to Sequence Learning with Neural Networks,” Adv. Neural Inf. Process. Syst., vol. 27, pp. 3104–3112, 2014.
[4] R. Raina, A. Madhavan, and A. Y. Ng, “Large-scale deep unsupervised learning using graphics processors,” in Proceedings of the 26th Annual International Conference on Machine Learning - ICML ’09, 2009, pp. 1–8.
[5] O. Russakovsky et al., “ImageNet Large Scale Visual Recognition Challenge,” Int. J. Comput. Vis., vol. 115, no. 3, pp. 211–252, Dec. 2015.
[6] K. Simonyan and A. Zisserman, “Very Deep Convolutional Networks for Large-Scale Image Recognition,” in Proc. 3rd Int. Conf. Learn. Represent. (ICLR), 2015, pp. 1–14.
[7] C. Szegedy et al., “Going Deeper with Convolutions,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2015, pp. 1–9.
[8] K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2016, pp. 770–778.
[9] F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally, and K. Keutzer, “SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size,” arXiv, Feb. 2016.
[10] A. G. Howard et al., “MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications,” arXiv, Apr. 2017.
[11] M. Tan and Q. V. Le, “EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks,” in Proc. 36th Int. Conf. Mach. Learn. (ICML), 2019, pp. 10691–10700.
[12] M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi, “XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks,” in Proc. Eur. Conf. Comput. Vis. (ECCV), Lect. Notes Comput. Sci., vol. 9908, 2016, pp. 525–542.
[13] Z. Liu, B. Wu, W. Luo, X. Yang, W. Liu, and K.-T. Cheng, “Bi-Real Net: Enhancing the Performance of 1-bit CNNs With Improved Representational Capability and Advanced Training Algorithm,” in Proc. Eur. Conf. Comput. Vis. (ECCV), Lect. Notes Comput. Sci., vol. 11219, 2018, pp. 747–763.
[14] J. Bethge, C. Bartz, H. Yang, Y. Chen, and C. Meinel, “MeliusNet: Can Binary Neural Networks Achieve MobileNet-level Accuracy?,” arXiv, Jan. 2020.
[15] S. Mittal, “A survey of FPGA-based accelerators for convolutional neural networks,” Neural Comput. Appl., vol. 32, no. 4, pp. 1109–1139, Feb. 2020.
[16] S. Han et al., “ESE: Efficient Speech Recognition Engine with Sparse LSTM on FPGA,” in Proc. 2017 ACM/SIGDA Int. Symp. Field-Programmable Gate Arrays (FPGA), 2017, pp. 75–84.
[17] A. Page, A. Jafari, C. Shea, and T. Mohsenin, “SPARCNet: A hardware accelerator for efficient deployment of sparse convolutional networks,” ACM J. Emerg. Technol. Comput. Syst., vol. 13, no. 3, pp. 1–32, May 2017.
[18] J. Qiu et al., “Going Deeper with Embedded FPGA Platform for Convolutional Neural Network,” in Proceedings of the 2016 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, Feb. 2016, pp. 26–35.
[19] J. Park and W. Sung, “FPGA Based Implementation of Deep Neural Networks Using On-chip Memory Only,” in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process. (ICASSP), 2016, pp. 1011–1015.
[20] S. A. Mirsalari, N. Nazari, S. A. Ansarmohammadi, M. E. Salehi, and S. Ghiasi, “E2BNet: MAC-free yet accurate 2-level binarized neural network accelerator for embedded systems,” J. Real-Time Image Process., Jul. 2021.
[21] R. Zhao et al., “Accelerating Binarized Convolutional Neural Networks with Software-Programmable FPGAs,” in Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, Feb. 2017, pp. 15–24.
[22] S. Liang, S. Yin, L. Liu, W. Luk, and S. Wei, “FP-BNN: Binarized neural network on FPGA,” Neurocomputing, vol. 275, pp. 1072–1086, Jan. 2018.
[23] Y. Umuroglu et al., “FINN: A Framework for Fast, Scalable Binarized Neural Network Inference,” in Proc. 2017 ACM/SIGDA Int. Symp. Field-Programmable Gate Arrays (FPGA), 2017, pp. 65–74.
[24] Z. Liu, Z. Shen, M. Savvides, and K.-T. Cheng, “ReActNet: Towards Precise Binary Neural Network with Generalized Activation Functions,” in Proc. Eur. Conf. Comput. Vis. (ECCV), Lect. Notes Comput. Sci., vol. 12359, 2020, pp. 143–159.
[25] M. Courbariaux, Y. Bengio, and J.-P. David, “BinaryConnect: Training Deep Neural Networks with binary weights during propagations,” Adv. Neural Inf. Process. Syst., vol. 28, pp. 3123–3131, 2015.
[26] M. Courbariaux, I. Hubara, D. Soudry, R. El-Yaniv, and Y. Bengio, “Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1,” arXiv, Feb. 2016.
[27] C.-H. Chen, M.-Y. Lin, and X.-C. Guo, “High-level modeling and synthesis of smart sensor networks for Industrial Internet of Things,” Comput. Electr. Eng., vol. 61, pp. 48–66, Jul. 2017.
[28] H. Xiao, K. Rasul, and R. Vollgraf, “Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms,” arXiv, Aug. 2017.
[29] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proc. IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
[30] P. Guo, H. Ma, R. Chen, P. Li, S. Xie, and D. Wang, “FBNA: A Fully Binarized Neural Network Accelerator,” in 2018 28th International Conference on Field Programmable Logic and Applications (FPL), Aug. 2018, pp. 51–513.
[31] W. Mao et al., “Energy-Efficient Machine Learning Accelerator for Binary Neural Networks,” in Proceedings of the 2020 Great Lakes Symposium on VLSI (GLSVLSI), Sep. 2020, pp. 77–82.