References
[1] D. Guo, J. Wang, Y. Cui, Z. Wang, and S. Chen, “SiamCAR: siamese fully convolutional classification and regression for visual tracking,” arXiv:1911.07241v2.
[2] J. Bromley, I. Guyon, Y. LeCun, E. Säckinger, and R. Shah, “Signature verification using a "Siamese" time delay neural network,” in Proc. 6th Int. Conf. on NIPS, Denver, Colorado, Nov.29 - Dec.2, 1993, pp.737-744.
[3] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” arXiv:1512.03385.
[4] T. Dekel, S. Oron, M. Rubinstein, S. Avidan, and W. T. Freeman, “Best-buddies similarity for robust template matching,” in Proc. of the IEEE Conf. on CVPR, Boston, MA, Jun.7-12, 2015, pp.2021-2029.
[5] I. Talmi, R. Mechrez, and L. Zelnik-Manor, “Template matching with deformable diversity similarity,” arXiv:1612.02190v2.
[6] R. Kat, R. Jevnisek, and S. Avidan, “Matching pixels using cooccurrence statistics,” in Proc. of the IEEE Conf. on CVPR, Salt Lake City, UT, Jun.18-23, 2018, pp.1751-1759.
[7] J. Cheng, Y. Wu, W. Abd-Almageed, and P. Natarajan, “QATM: quality-aware template matching for deep learning,” arXiv:1903.07254v2.
[8] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv:1409.1556v6.
[9] L. Bertinetto, J. Valmadre, J. F. Henriques, A. Vedaldi, and P. H. S. Torr, “Fully-convolutional siamese networks for object tracking,” arXiv:1606.09549v2.
[10] J. Valmadre, L. Bertinetto, J. F. Henriques, A. Vedaldi, and P. H. S. Torr, “End-to-end representation learning for correlation filter based tracking,” arXiv:1704.06036.
[11] Q. Wang, J. Gao, J. Xing, M. Zhang, and W. Hu, “DCFNet: discriminant correlation filters network for visual tracking,” arXiv:1704.04057.
[12] Q. Guo, W. Feng, C. Zhou, R. Huang, L. Wan, and S. Wang, “Learning dynamic siamese network for visual object tracking,” in Proc. of the IEEE Int. Conf. on Computer Vision (ICCV), Venice, Italy, Oct.22-29, 2017, pp.1781-1789.
[13] A. He, C. Luo, X. Tian, and W. Zeng, “A twofold siamese network for real-time object tracking,” arXiv:1802.08817.
[14] B. Li, J. Yan, W. Wu, Z. Zhu, and X. Hu, “High performance visual tracking with siamese region proposal network,” in Proc. of the IEEE Conf. on CVPR, Salt Lake City, UT, Jun.18-23, 2018, pp.8971-8980.
[15] Z. Zhu, Q. Wang, B. Li, W. Wu, J. Yan, and W. Hu, “Distractor-aware siamese networks for visual object tracking,” arXiv:1808.06048.
[16] B. Li, W. Wu, Q. Wang, F. Zhang, J. Xing, and J. Yan, “SiamRPN++: evolution of siamese visual tracking with very deep networks,” arXiv:1812.11703.
[17] Q. Wang, L. Zhang, L. Bertinetto, W. Hu, and P. H. S. Torr, “Fast online object tracking and segmentation: a unifying approach,” arXiv:1812.05050v2.
[18] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: towards real-time object detection with region proposal networks,” arXiv:1506.01497v3.
[19] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Proc. of NIPS, Lake Tahoe, Nevada, Dec.3-8, 2012, pp.1-9.
[20] K. He, G. Gkioxari, P. Dollar, and R. Girshick, “Mask R-CNN,” arXiv:1703.06870v3.
[21] R. Girshick, “Fast R-CNN,” arXiv:1504.08083v2.
[22] D. Zhou, J. Fang, X. Song, C. Guan, J. Yin, Y. Dai, and R. Yang, “IoU loss for 2D/3D object detection,” arXiv:1908.03851.
[23] H. Rezatofighi, N. Tsoi, J. Gwak, A. Sadeghian, I. Reid, and S. Savarese, “Generalized intersection over union: a metric and a loss for bounding box regression,” arXiv:1902.09630v2.
[24] Z. Zheng, P. Wang, W. Liu, J. Li, R. Ye, and D. Ren, “Distance-IoU loss: faster and better learning for bounding box regression,” arXiv:1911.08287.
[25] H. Fan and H. Wang, “Siamese cascaded region proposal networks for real-time visual tracking,” arXiv:1812.06148.
[26] G. Wang, C. Luo, Z. Xiong, and W. Zeng, “SPM-tracker: series-parallel matching for real-time visual object tracking,” arXiv:1904.04452.
[27] Q. Wu, Y. Yan, Y. Liang, Y. Liu, and H. Wang, “DSNet: deep and shallow feature learning for efficient visual tracking,” arXiv:1811.02208.
[28] Z. Zhang and H. Peng, “Deeper and wider Siamese networks for real-time visual tracking,” arXiv:1901.01660v3.
[29] D. Gordon, A. Farhadi, and D. P. Fox, “Re3: Real-time recurrent regression networks for visual tracking of generic objects,” arXiv:1705.06368v3.
[30] D. Vaghela and P. K. Naina, “A review of image mosaicing techniques,” arXiv:1405.2539.
[31] A. Alaei and M. Delalandre, “A complete logo detection/recognition system for document images,” in Proc. of IAPR, Tours, France, April 7-10, 2014, pp.324-328.
[32] A. Malti, R. Hartley, A. Bartoli, and J. Kim, “Monocular template-based 3D reconstruction of extensible surfaces with local linear elasticity,” in Proc. of the IEEE Conf. on CVPR, Portland, OR, Jun.23-28, 2013, pp.1522-1529.
[33] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan, “Object detection with discriminatively trained part-based models,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol.32, no.9, pp.1627-1645, 2010.
[34] A. Neubeck and L. V. Gool, “Efficient non-maximum suppression,” in Proc. of 18th Int. Conf. on Pattern Recognition (ICPR), Hong Kong, Aug.20-24, 2006, pp.850-855.
[35] Z. Tian, C. Shen, H. Chen, and T. He, “FCOS: fully convolutional one-stage object detection,” arXiv:1904.01355v5.