References
[1]Liu, Z., Wu, B., Luo, W., Yang, X., Liu, W., & Cheng, K. T. (2018). Bi-Real Net: Enhancing the performance of 1-bit CNNs with improved representational capability and advanced training algorithm. In Proceedings of the European Conference on Computer Vision (ECCV) (pp. 722-737).
[2]Liu, Z., Shen, Z., Savvides, M., & Cheng, K. T. (2020). ReActNet: Towards precise binary neural network with generalized activation functions. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XIV (pp. 143-159). Springer International Publishing.
[3]Zhang, Y., Pan, J., Liu, X., Chen, H., Chen, D., & Zhang, Z. (2021, February). FracBNN: Accurate and FPGA-efficient binary neural networks with fractional activations. In The 2021 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays (pp. 171-182).
[4]Iandola, F. N., Han, S., Moskewicz, M. W., Ashraf, K., Dally, W. J., & Keutzer, K. (2016). SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size. arXiv preprint arXiv:1602.07360.
[5]Howard, A. G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., ... & Adam, H. (2017). MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861.
[6]Zhang, X., Zhou, X., Lin, M., & Sun, J. (2018). ShuffleNet: An extremely efficient convolutional neural network for mobile devices. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 6848-6856).
[7]Chollet, F. (2017). Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1251-1258).
[8]Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., & Chen, L. C. (2018). MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 4510-4520).
[9]He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778).
[10]Krizhevsky, A., & Hinton, G. (2010). Convolutional deep belief networks on CIFAR-10. Unpublished manuscript, 40(7), 1-9.
[11]Ioffe, S., & Szegedy, C. (2015, June). Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning (pp. 448-456). PMLR.
[12]Liu, Z., Shen, Z., Li, S., Helwegen, K., Huang, D., & Cheng, K. T. (2021, July). How do Adam and training strategies help BNNs optimization? In International Conference on Machine Learning (pp. 6936-6946). PMLR.
[13]Brock, A., De, S., Smith, S. L., & Simonyan, K. (2021, July). High-performance large-scale image recognition without normalization. In International Conference on Machine Learning (pp. 1059-1071). PMLR.
[14]Tu, Z., Chen, X., Ren, P., & Wang, Y. (2022, October). AdaBin: Improving binary neural networks with adaptive binary sets. In European Conference on Computer Vision (pp. 379-395). Cham: Springer Nature Switzerland.
[15]Rastegari, M., Ordonez, V., Redmon, J., & Farhadi, A. (2016, September). XNOR-Net: ImageNet classification using binary convolutional neural networks. In European Conference on Computer Vision (pp. 525-542). Cham: Springer International Publishing.
[16]Summers, C., & Dinneen, M. J. (2019). Four things everyone should know to improve batch normalization. arXiv preprint arXiv:1906.03548.
[17]Martinez, B., Yang, J., Bulat, A., & Tzimiropoulos, G. (2020). Training binary neural networks with real-to-binary convolutions. arXiv preprint arXiv:2003.11535.
[18]Hinton, G., Vinyals, O., & Dean, J. (2015). Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.
[19]Zhuang, B., Shen, C., Tan, M., Liu, L., & Reid, I. (2018). Towards effective low-bitwidth convolutional neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 7920-7928).
[20]Merity, S., Keskar, N. S., & Socher, R. (2017). Regularizing and optimizing LSTM language models. arXiv preprint arXiv:1708.02182.
[21]Brock, A., De, S., & Smith, S. L. (2021). Characterizing signal propagation to close the performance gap in unnormalized ResNets. arXiv preprint arXiv:2101.08692.
[22]Deng, J., Dong, W., Socher, R., Li, L. J., Li, K., & Fei-Fei, L. (2009, June). ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition (pp. 248-255). IEEE.
[23]Krizhevsky, A. (2009). Learning multiple layers of features from tiny images. Technical report, University of Toronto. CIFAR-10 and CIFAR-100 datasets. Retrieved from https://www.cs.toronto.edu/~kriz/cifar.html