References
[1] PolyU Palmprint Database. [Online]. Available: http://www.comp.polyu.edu.hk/~biometrics
[2] CASIA Palmprint Database. [Online]. Available: http://biometrics.idealtest.org/
[3] College of Engineering Pune (COEP), Pune-411005 (an autonomous institute of the Government of Maharashtra), COEP Palm Print Database. [Online]. Available: https://www.coep.org.in/resources/coeppalmprintdatabase
[4] C. R. Qi, H. Su, M. Nießner, A. Dai, M. Yan, and L. J. Guibas, “Volumetric and multi-view CNNs for object classification on 3D data,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2016, pp. 5648–5656.
[5] S. Shi, X. Wang, and H. Li, “PointRCNN: 3D object proposal generation and detection from point cloud,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2019, pp. 770–779.
[6] C. Xu, B. Wu, Z. Wang, W. Zhan, P. Vajda, K. Keutzer, and M. Tomizuka, “SqueezeSegV3: Spatially-adaptive convolution for efficient point-cloud segmentation,” in Proc. Eur. Conf. Comput. Vis. (ECCV), 2020.
[7] C. R. Qi, H. Su, K. Mo, and L. J. Guibas, “PointNet: Deep learning on point sets for 3D classification and segmentation,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2017, pp. 652–660.
[8] Y. Chen, V. T. Hu, E. Gavves, T. Mensink, P. Mettes, P. Yang, and C. G. M. Snoek, “PointMixup: Augmentation for point clouds,” in Proc. Eur. Conf. Comput. Vis. (ECCV), 2020, pp. 330–345.
[9] S. Kim, S. Lee, D. Hwang, J. Lee, S. J. Hwang, and H. J. Kim, “Point cloud augmentation with weighted local transformations,” in Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCV), 2021, pp. 548–557.
[10] D. Lee et al., “Regularization strategy for point cloud via rigidly mixed sample,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., Jun. 2021, pp. 15900–15909.
[11] S. Shi, C. Guo, L. Jiang, Z. Wang, J. Shi, X. Wang, and H. Li, “PV-RCNN: Point-voxel feature set abstraction for 3D object detection,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), 2020.
[12] H. Su, S. Maji, E. Kalogerakis, and E. Learned-Miller, “Multi-view convolutional neural networks for 3D shape recognition,” in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), 2015, pp. 945–953.
[13] C. R. Qi, H. Su, M. Nießner, A. Dai, M. Yan, and L. J. Guibas, “Volumetric and multi-view CNNs for object classification on 3D data,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2016, pp. 5648–5656.
[14] L. Li, S. Zhu, H. Fu, P. Tan, and C.-L. Tai, “End-to-end learning local multi-view descriptors for 3D point clouds,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), 2020, pp. 1919–1928.
[15] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proc. IEEE, vol. 86, no. 11, pp. 2278–2324, Nov. 1998.
[16] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Proc. 25th Int. Conf. Neural Inf. Process. Syst., 2012, pp. 1097–1105.
[17] A. G. Howard et al., “MobileNets: Efficient convolutional neural networks for mobile vision applications,” Apr. 2017. [Online]. Available: https://arxiv.org/abs/1704.04861
[18] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, “MobileNetV2: Inverted residuals and linear bottlenecks,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., Jun. 2018, pp. 4510–4520.
[19] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: A large-scale hierarchical image database,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2009, pp. 248–255.
[20] A. Krizhevsky and G. Hinton, “Learning multiple layers of features from tiny images,” Tech. Rep., Univ. of Toronto, 2009.
[21] T.-Y. Lin, et al., “Microsoft COCO: Common objects in context,” in Proc. 13th Eur. Conf. Comput. Vis., 2014, pp. 740–755.
[22] H. Zhang, M. Cisse, Y. Dauphin, and D. Lopez-Paz, “mixup: Beyond empirical risk minimization,” in Proc. Int. Conf. Learn. Represent. (ICLR), 2018.
[23] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2016, pp. 770–778.
[24] M. Lin, Q. Chen, and S. Yan, “Network in network,” in Proc. ICLR, 2014.
[25] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” in Proc. ICLR, 2015.
[26] F. Chollet, “Xception: Deep learning with depthwise separable convolutions,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2017.
[27] D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in Proc. Int. Conf. Learn. Represent. (ICLR), 2015.
[28] V. Kanhangad, A. Kumar, and D. Zhang, “Contactless and pose invariant biometric identification using hand surface,” IEEE Trans. Image Process., vol. 20, no. 5, pp. 1415–1424, May 2011.
[29] Minolta Vivid 910 Non-Contact 3D Digitizer, 2008. [Online]. Available: http://www.konicaminolta.com/instruments/products/3d/non-contact/vivid910/index.html
[30] N. Ma, X. Zhang, H.-T. Zheng, and J. Sun, “ShuffleNet V2: Practical guidelines for efficient CNN architecture design,” in Proc. Eur. Conf. Comput. Vis. (ECCV), 2018, pp. 116–131.
[31] J. Hu, L. Shen, and G. Sun, “Squeeze-and-excitation networks,” arXiv preprint arXiv:1709.01507, 2017.