References
[1] F. Rameau, D. D. Sidibe, C. Demonceaux, and D. Fofi, “Visual tracking with omnidirectional cameras: an efficient approach,” Electronics Letters, Vol. 47, No. 21, pp. 1183-1184, October 2011.
[2] Y. Tang, Y. Li, S. S. Ge, J. Luo, and H. Ren, “Parameterized distortion-invariant feature for robust tracking in omnidirectional vision,” IEEE Transactions on Automation Science and Engineering, Vol. 13, No. 2, pp. 743-756, April 2016.
[3] G. Tong and J. Gu, “Locating objects in spherical panoramic images,” in Proceedings of IEEE International Conference on Robotics and Biomimetics, pp. 818-823, December 2011.
[4] B. S. Kim and J. S. Park, “Estimating deformation factors of planar patterns in spherical panoramic images,” Multimedia Systems, Vol. 23, pp. 607-625, April 2016. DOI: 10.1007/s00530-016-0513-x.
[5] K. C. Liu, Y. T. Shen, and L. G. Chen, “Simple online and realtime tracking with spherical panoramic camera,” in Proceedings of IEEE International Conference on Consumer Electronics, pp. 1-6, Jan. 2018.
[6] J. Redmon and A. Farhadi, “YOLO9000: better, faster, stronger,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 6517-6525, July 2017. [Online]. Available: http://arxiv.org/abs/1612.08242.
[7] N. Wojke, A. Bewley, and D. Paulus, “Simple online and realtime tracking with a deep association metric,” in Proceedings of IEEE International Conference on Image Processing, pp. 3645-3649, September 2017. [Online]. Available: http://arxiv.org/abs/1703.07402.
[8] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, “ORB: An efficient alternative to SIFT or SURF,” in Proceedings of International Conference on Computer Vision, pp. 2564-2571, Nov. 2011.
[9] E. Karami, S. Prasad, and M. Shehata, “Image matching using SIFT, SURF, BRIEF and ORB: Performance comparison for distorted images,” in Proceedings of Newfoundland Electrical and Computer Engineering Conference, November 2015.
[10] H. Bay, T. Tuytelaars, and L. Van Gool, “SURF: Speeded up robust features,” in Proceedings of the European Conference on Computer Vision, pp. 404-417, May 2006.
[11] C. Bao, Y. Wu, H. Ling, and H. Ji, “Real time robust l1 tracker using accelerated proximal gradient approach,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 1830-1837, June 2012.
[12] A. Yilmaz, O. Javed, and M. Shah, “Object tracking: A survey,” ACM Computing Surveys, Vol. 38, No. 4, pp. 1-45, December 2006.
[13] N. J. Gordon, D. J. Salmond, and A. F. M. Smith, “Novel approach to nonlinear/non-Gaussian Bayesian state estimation,” IEE Proceedings F - Radar and Signal Processing, Vol. 140, pp. 107-113, April 1993.
[14] M. S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp, “A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking,” IEEE Transactions on Signal Processing, Vol. 50, No. 2, pp. 174-188, Feb. 2002.
[15] K. Nummiaro, E. Koller-Meier, and L. V. Gool, “An adaptive color-based particle filter,” Image and Vision Computing, Vol. 21, No. 1, pp. 99-110, Jan. 2003.
[16] F. Wallhoff, M. Zobl, and G. Rigoll, “Face tracking in meeting room scenarios using omnidirectional views,” in Proceedings of IEEE International Conference on Pattern Recognition, Vol. 4, pp. 933-936, Aug. 2004.
[17] Z. Zhou, B. Niu, C. Ke, and W. Wu, “Static object tracking in road panoramic videos,” in Proceedings of IEEE International Symposium on Multimedia, pp. 57-64, Dec. 2010.
[18] M. Budagavi, J. Furton, G. Jin, A. Saxena, J. Wilkinson, and A. Dickerson, “360 degrees video coding using region adaptive smoothing,” in Proceedings of IEEE International Conference on Image Processing, pp. 750-754, September 2015.
[19] C. Chen, M.-Y. Liu, O. Tuzel, and J. Xiao, “R-CNN for small object detection,” in Proceedings of the Asian Conference on Computer Vision, Springer International Publishing, pp. 214-230, March 2017.
[20] H. Hu, Y. Lin, M. Liu, H. Cheng, Y. Chang, and M. Sun, “Deep 360 pilot: Learning a deep agent for piloting through 360-degree sports video,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 1396-1405, July 2017. [Online]. Available: http://arxiv.org/abs/1705.01759.
[21] P. Torr and A. Zisserman, “Robust computation and parametrization of multiple view relations,” in Proceedings of International Conference on Computer Vision, pp. 727-732, Jan. 1998.
[22] Y. Ye, E. Alshina, and J. Boyce, “Algorithm descriptions of projection format conversion and video quality metrics in 360Lib Version 4,” Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 7th Meeting: Torino, IT, 13-21 July 2017.
[23] X. Corbillon, F. De Simone, and G. Simon, “360-degree video head movement dataset,” in Proceedings of ACM Multimedia Systems, pp. 1-4, June 2017.
[24] Mettle, “360/VR Master Series free 360 downloads.” [Online]. Available: https://www.mettle.com/360vr-master-series-free-360-downloads-page.
[25] F. Duanmu, Y. Mao, S. Liu, S. Srinivasan, and Y. Wang, “A subjective study of viewer navigation behaviors when watching 360-degree videos on computers,” in Proceedings of IEEE International Conference on Multimedia and Expo, San Diego, California, USA, 2018.
[26] X. Mei and H. Ling, “Robust visual tracking using l1 minimization,” in Proceedings of IEEE International Conference on Computer Vision, Kyoto, Japan, pp. 1436-1443, Sep.-Oct. 2009.
[27] Y. Wu, J. Lim, and M.-H. Yang, “Online object tracking: A benchmark,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 2411-2418, June 2013.