References
[1] M. O. Almasawa, L. A. Elrefaei, and K. Moria, “A survey on deep learning-based person re-identification systems,” IEEE Access, vol. 7, pp. 175228–175247, 2019.
[2] O. Camps et al., “From the lab to the real world: Re-identification in an airport camera network,” IEEE Trans. Circuits Syst. Video Technol., vol. 27, no. 3, pp. 540–553, 2017.
[3] R. Iguernaissi, D. Merad, K. Aziz, and P. Drap, “People tracking in multi-camera systems: A review,” Multimed. Tools Appl., vol. 78, no. 8, pp. 10773–10793, 2019.
[4] X. Sun and L. Zheng, “Dissecting person re-identification from the viewpoint of viewpoint,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 608–617.
[5] D. Xu, J. Chen, C. Liang, Z. Wang, and R. Hu, “Cross-view identical part area alignment for person re-identification,” in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019, pp. 2462–2466.
[6] Y. Wang et al., “Resource aware person re-identification across multiple resolutions,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 8042–8051.
[7] Y. Huang, Z.-J. Zha, X. Fu, and W. Zhang, “Illumination-invariant person re-identification,” in Proceedings of the 27th ACM International Conference on Multimedia, 2019, pp. 365–373.
[8] M. S. Sarfraz, A. Schumann, A. Eberle, and R. Stiefelhagen, “A pose-sensitive embedding for person re-identification with expanded cross neighborhood re-ranking,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 420–429.
[9] J. Miao, Y. Wu, P. Liu, Y. Ding, and Y. Yang, “Pose-guided feature alignment for occluded person re-identification,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 542–551.
[10] H. Huang, D. Li, Z. Zhang, X. Chen, and K. Huang, “Adversarially occluded samples for person re-identification,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 5098–5107.
[11] R. Hou, B. Ma, H. Chang, X. Gu, S. Shan, and X. Chen, “VRSTC: Occlusion-free video person re-identification,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 7183–7192.
[12] C. Song, Y. Huang, W. Ouyang, and L. Wang, “Mask-guided contrastive attention model for person re-identification,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 1179–1188.
[13] L. Zheng, Y. Yang, and A. G. Hauptmann, “Person re-identification: Past, present and future,” arXiv preprint arXiv:1610.02984, 2016.
[14] L. Zheng, L. Shen, L. Tian, S. Wang, J. Wang, and Q. Tian, “Scalable person re-identification: A benchmark,” in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 1116–1124.
[15] L. Liao et al., “A half-precision compressive sensing framework for end-to-end person re-identification,” Neural Comput. Appl., vol. 32, no. 4, pp. 1141–1155, 2020.
[16] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778.
[17] N. Mathur, S. Mathur, D. Mathur, and P. Dadheech, “A brief survey of deep learning techniques for person re-identification,” in Proceedings of the 3rd International Conference on Emerging Technologies in Computer Engineering: Machine Learning and Internet of Things (ICETCE), 2020, pp. 129–138.
[18] K. Zhou, Y. Yang, A. Cavallaro, and T. Xiang, “Omni-scale feature learning for person re-identification,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 3702–3712.
[19] A. Bochkovskiy, C.-Y. Wang, and H.-Y. M. Liao, “YOLOv4: Optimal speed and accuracy of object detection,” arXiv preprint arXiv:2004.10934, 2020.
[20] N. Wojke, A. Bewley, and D. Paulus, “Simple online and realtime tracking with a deep association metric,” in Proceedings of the IEEE International Conference on Image Processing (ICIP), 2017, pp. 3645–3649.
[21] S. Liu, L. Qi, H. Qin, J. Shi, and J. Jia, “Path aggregation network for instance segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 8759–8768.
[22] T. L. Munea, Y. Z. Jembre, H. T. Weldegebriel, L. Chen, C. Huang, and C. Yang, “The progress of human pose estimation: A survey and taxonomy of models applied in 2D human pose estimation,” IEEE Access, vol. 8, pp. 133330–133348, 2020.
[23] H. Liu, H. Nie, Z. Zhang, and Y.-F. Li, “Anisotropic angle distribution learning for head pose estimation and attention understanding in human-computer interaction,” Neurocomputing, vol. 433, pp. 310–322, 2021.
[24] E. Marchand, H. Uchiyama, and F. Spindler, “Pose estimation for augmented reality: A hands-on survey,” IEEE Trans. Vis. Comput. Graph., vol. 22, no. 12, pp. 2633–2651, 2016.
[25] Q. Dang, J. Yin, B. Wang, and W. Zheng, “Deep learning based 2D human pose estimation: A survey,” Tsinghua Sci. Technol., vol. 24, no. 6, pp. 663–676, 2019.
[26] C. Zheng et al., “Deep learning-based human pose estimation: A survey,” arXiv preprint arXiv:2012.13392, 2020.
[27] H.-S. Fang, S. Xie, Y.-W. Tai, and C. Lu, “RMPE: Regional multi-person pose estimation,” in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 2334–2343.
[28] M. Jaderberg, K. Simonyan, and A. Zisserman, “Spatial transformer networks,” Adv. Neural Inf. Process. Syst., vol. 28, pp. 2017–2025, 2015.
[29] Z. Cao, G. Hidalgo, T. Simon, S.-E. Wei, and Y. Sheikh, “OpenPose: Realtime multi-person 2D pose estimation using part affinity fields,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 43, no. 1, pp. 172–186, 2021.
[30] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
[31] M. Ye, J. Shen, G. Lin, T. Xiang, L. Shao, and S. C. H. Hoi, “Deep learning for person re-identification: A survey and outlook,” IEEE Trans. Pattern Anal. Mach. Intell., 2021.
[32] G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 4700–4708.
[33] J. Hu, L. Shen, and G. Sun, “Squeeze-and-excitation networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 7132–7141.
[34] Y. Sun, L. Zheng, Y. Yang, Q. Tian, and S. Wang, “Beyond part models: Person retrieval with refined part pooling (and a strong convolutional baseline),” in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 480–496.
[35] X. Chang, T. M. Hospedales, and T. Xiang, “Multi-level factorisation net for person re-identification,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 2109–2118.
[36] F. Chollet, “Xception: Deep learning with depthwise separable convolutions,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 1251–1258.
[37] C.-H. Chen, M.-Y. Lin, and X.-C. Guo, “High-level modeling and synthesis of smart sensor networks for Industrial Internet of Things,” Comput. Electr. Eng., vol. 61, pp. 48–66, 2017.
[38] R. J. Mayer, “IDEF0 function modeling,” Air Force Syst. Command, 1992.
[39] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan, “Object detection with discriminatively trained part-based models,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 9, pp. 1627–1645, 2010.
[40] M. Everingham, S. M. A. Eslami, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, “The PASCAL visual object classes challenge: A retrospective,” Int. J. Comput. Vis., vol. 111, no. 1, pp. 98–136, 2015.