References
[1] D. Grest, J. Woetzel, and R. Koch, “Nonlinear Body Pose Estimation from Depth Images,” in Proc. of DAGM, Vol. 3663, pp. 285–292, 2005.
[2] Y. Zhu and K. Fujimura, “Constrained Optimization for Human Pose Estimation from Depth Sequences,” in Asian Conference on Computer Vision (ACCV), Vol. 4843, pp. 408–418, 2007.
[3] V. Ganapathi, C. Plagemann, D. Koller, and S. Thrun, “Real Time Motion Capture Using a Single Time-Of-Flight Camera,” in IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 755–762, 2010.
[4] C. Plagemann, V. Ganapathi, D. Koller, and S. Thrun, “Real-time Identification and Localization of Body Parts from Depth Images,” in IEEE Conf. on Robotics and Automation (ICRA), pp. 3108–3113, 2010.
[5] A. Baak, M. Müller, G. Bharaj, H. P. Seidel, and C. Theobalt, “A Data-Driven Approach for Real-Time Full Body Pose Reconstruction from a Depth Camera,” in IEEE International Conference on Computer Vision (ICCV), pp. 1092–1099, 2011.
[6] J. Shotton, A. Fitzgibbon, M. Cook, T. Sharp, M. Finocchio, R. Moore, A. Kipman, and A. Blake, “Real-Time Human Pose Recognition in Parts from Single Depth Images,” in IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 1297–1304, 2011.
[7] R. Girshick, J. Shotton, P. Kohli, A. Criminisi, and A. Fitzgibbon, “Efficient Regression of General-Activity Human Poses from Depth Images,” in IEEE International Conference on Computer Vision (ICCV), pp. 415–422, 2011.
[8] J. Gall, A. Yao, N. Razavi, L. Van Gool, and V. Lempitsky, “Hough Forests for Object Detection, Tracking, and Action Recognition,” in IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), Vol. 33, Issue 11, pp. 2188–2202, 2011.
[9] M. Sun, P. Kohli, and J. Shotton, “Conditional Regression Forests for Human Pose Estimation,” in IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 3394–3401, 2012.
[10] A. Abramov, K. Pauwels, J. Papon, F. Wörgötter, and B. Dellen, “Depth-supported Real-time Video Segmentation with the Kinect,” in IEEE Workshop on Applications of Computer Vision (WACV), pp. 457–464, 2012.
[11] G. Klein and D. W. Murray, “Parallel Tracking and Mapping for Small AR Workspaces,” in IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR), pp. 225–234, 2007.
[12] S. Izadi, D. Kim, O. Hilliges, D. Molyneaux, R. Newcombe, P. Kohli, J. Shotton, S. Hodges, D. Freeman, A. Davison, and A. Fitzgibbon, “KinectFusion: Real-time 3D Reconstruction and Interaction using a Moving Depth Camera,” in ACM Symposium on User Interface Software and Technology (UIST), pp. 559–568, 2011.
[13] H. Lim, S. O. Lee, J. H. Lee, M. H. Sung, Y. W. Cha, H. G. Kim, and S. C. Ahn, “Putting Real-World Objects into Virtual World: Fast Automatic Creation of Animatable 3D Models with a Consumer Depth Camera,” in International Symposium on Ubiquitous Virtual Reality (ISUVR), pp. 38–41, 2012.
[14] J. Tong, J. Zhou, L. Liu, Z. Pan, and H. Yan, “Scanning 3D Full Human Bodies using Kinects,” in IEEE Transactions on Visualization and Computer Graphics, Vol. 18, Issue 4, pp. 643–650, 2012.
[15] G. Ye, Y. Liu, N. Hasler, X. Ji, Q. Dai, and C. Theobalt, “Performance Capture of Interacting Characters with Handheld Kinects,” in Proc. of European Conference on Computer Vision (ECCV), pp. 828–841, 2012.
[16] A. Maimone, J. Bidwell, K. Peng, and H. Fuchs, “Enhanced Personal Autostereoscopic Telepresence System using Commodity Depth Cameras,” in Computers & Graphics, Vol. 36, Issue 7, pp. 791–807, 2012.
[17] R. Poppe, “A Survey on Vision-based Human Action Recognition,” in Image and Vision Computing, Vol. 28, Issue 6, pp. 976–990, 2010.
[18] C. Schuldt, I. Laptev, and B. Caputo, “Recognizing Human Actions: a Local SVM Approach,” in Proc. of International Conference on Pattern Recognition (ICPR), Vol. 3, pp. 32–36, 2004.
[19] K. Mikolajczyk and H. Uemura, “Action Recognition with Motion-Appearance Vocabulary Forest,” in IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 1–8, 2008.
[20] S. Ali and M. Shah, “Human Action Recognition in Videos Using Kinematic Features and Multiple Instance Learning,” in IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), Vol. 32, Issue 2, 2010.
[21] A. Yao, J. Gall, and L. Van Gool, “A Hough Transform-based Voting Framework for Action Recognition,” in IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 2061–2068, 2010.
[22] A. Verikas, A. Gelzinis, and M. Bacauskiene, “Mining Data with Random Forests: A Survey and Results of New Tests,” in Pattern Recognition, Vol. 44, Issue 2, pp. 330–349, 2011.
[23] H. Wang, A. Kläser, C. Schmid, and C. L. Liu, “Action Recognition by Dense Trajectories,” in IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 3169–3176, 2011.
[24] N. Dalal and B. Triggs, “Histograms of Oriented Gradients for Human Detection,” in IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Vol. 1, pp. 886–893, 2005.
[25] N. Dalal, B. Triggs, and C. Schmid, “Human Detection using Oriented Histograms of Flow and Appearance,” in Proc. of European Conference on Computer Vision (ECCV), pp. 428–441, 2006.
[26] S. Sadanand and J. J. Corso, “Action Bank: A High-Level Representation of Activity in Video,” in IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 1234–1241, 2012.
[27] A. Kläser, M. Marszałek, and C. Schmid, “A Spatio-Temporal Descriptor Based on 3D-Gradients,” in Proc. of British Machine Vision Conference (BMVC), pp. 275:1–10, 2008.
[28] Microsoft Kinect for Windows, http://www.microsoft.com/en-us/kinectforwindows/, 2011.