References
[1] P. W. Wang, Z. J. Ding, C. J. Jiang, and M. C. Zhou, “Design and implementation of a web-service-based public-oriented personalized health care platform,” IEEE Trans. Syst., Man, and Cybern.: Syst., vol. 43, no. 4, pp. 941–957, 2013.
[2] Y. Hata, S. Kobashi, and H. Nakajima, “Human health care system of systems,” IEEE Syst. J., vol. 3, no. 2, pp. 231–238, 2009.
[3] S. Spinsante and E. Gambi, “Remote health monitoring by OSGi technology and digital TV integration,” IEEE Trans. Consum. Electron., vol. 58, no. 4, pp. 1434–1441, 2012.
[4] A. Benharref and M. A. Serhani, “Novel cloud and SOA-based framework for E-health monitoring using wireless biosensors,” IEEE J. Biomed. Health Inform., vol. 18, no. 1, pp. 46–55, 2014.
[5] Y. C. Wu et al., “A mobile-phone-based health management system,” Chapter 2, Health Management – Different Approaches and Solutions [Online]. Available: http://www.intechopen.com/books/health-management-different-approaches-and-solutions.
[6] M. G. H. Al Zamil, M. Rawashdeh, S. Samarah, M. S. Hossain, A. Alnusair, and S. M. M. Rahman, “An annotation technique for in-home smart monitoring environments,” IEEE Access, vol. 6, pp. 1471–1479, 2018.
[7] C. R. Costa, L. E. Anido-Rifon, and M. J. Fernandez-Iglesias, “An open architecture to support social and health services in a smart TV environment,” IEEE J. Biomed. Health Inform., vol. 21, no. 2, pp. 549–560, 2017.
[8] N. A. Shaked, “Avatars and virtual agents – relationship interfaces for the elderly,” Healthc. Technol. Lett., vol. 4, no. 3, pp. 83–87, 2017.
[9] J. H. Abawajy and M. M. Hassan, “Federated Internet of Things and cloud computing pervasive patient health monitoring system,” IEEE Commun. Mag., vol. 55, no. 1, pp. 48–53, 2017.
[10] K. de Miguel, A. Brunete, M. Hernando, and E. Gambao, “Home camera-based fall detection system for the elderly,” Sensors, vol. 17, no. 12, pp. 1–21, 2017.
[11] S. Kim, S. Yeom, O. J. Kwon, D. Shin, and D. Shin, “Ubiquitous healthcare system for analysis of chronic patients’ biological and lifelog data,” IEEE Access, vol. 6, pp. 8909–8915, 2018.
[12] L. Ding and A. M. Martinez, “Feature versus context: an approach for precise and detailed detection and delineation of faces and facial features,” IEEE Trans. Pattern Anal. Machine Intell., vol. 32, no. 11, pp. 2022–2037, 2010.
[13] G. Xu and X. Yuan, “Facial features regions locating method,” in Proc. Int. Conf. Signal Process. Syst., Dalian, China, pp. 676–679, 2010.
[14] R. C. Gonzalez and R. E. Woods, “Digital Image Processing,” 2nd ed., Prentice-Hall, 2002.
[15] F. Deboeverie, P. Veelaert, and W. Philips, “Face analysis using curve edge maps,” in Proc. Int. Conf. Image Anal. and Process., Ravenna, Italy, pp. 109–118, 2011.
[16] J. Canny, “A computational approach to edge detection,” IEEE Trans. Pattern Anal. Machine Intell., vol. 8, no. 6, pp. 679–698, 1986.
[17] Y. Yacoob and L. S. Davis, “Detection and analysis of hair,” IEEE Trans. Pattern Anal. Machine Intell., vol. 28, no. 7, pp. 1164–1169, 2006.
[18] C. Rousset and P. Y. Coulon, “Frequential and color analysis for hair mask segmentation,” in Proc. IEEE Int. Conf. Image Process., San Diego, CA, USA, pp. 2276–2279, 2008.
[19] U. Lipowezky, O. Mamo, and A. Cohen, “Using integrated color and texture features for automatic hair detection,” in Proc. IEEEI, pp. 51–55, 2008.
[20] P. Julian, C. Dehais, F. Lauze, V. Charvillat, A. Bartoli, and A. Choukroun, “Automatic hair detection in the wild,” in Proc. Int. Conf. Pattern Recognit., pp.4617–4620, 2010.
[21] T. F. Cootes, C. J. Taylor, D. Cooper, and J. Graham, “Active shape models - their training and application,” Comput. Vis. Image Underst., vol. 61, no. 1, pp. 38–59, 1995.
[22] C. K. Yang and C. N. Kuo, "Automatically extracting hairstyles from 2D images," in Proc. ISVC, Rethymnon, Crete, Greece, 2013, pp. 406–415.
[23] C. Y. Chang, S. C. Li, P. C. Chung, J. Y. Kuo, and Y. C. Tu, “Automatic facial skin defect detection system,” in Proc. BWCCA, Fukuoka, Japan, 2010, pp. 527–532.
[24] C. Y. Chang and H. Y. Liao, “Automatic facial spots and acnes detection system,” J. Cosmet. Dermatol. Sci. Appl., vol. 3, no. 1, pp. 28–35, 2013.
[25] Z. Z. Htike, S. Egerton, and K. Y. Chow, “A monocular view-invariant fall detection system for the elderly in assisted home environments,” Seventh International Conference on Intelligent Environments, pp. 40–46, 2011.
[26] L. Tong, W. Chen, Q. Song, and Y. Ge, “A research on automatic human fall detection method based on wearable inertial force information acquisition system,” IEEE International Conference on Robotics and Biomimetics, pp. 949–953, 2009.
[27] Q. Li, J. A. Stankovic, M. A. Hanson, A. T. Barth, J. Lach, and G. Zhou, “Accurate, fast fall detection using gyroscopes and accelerometer-derived posture information,” Proc. 6th IEEE Int. Workshop Wearable and Implantable Body Sensor Networks, pp. 138–143, 2009.
[28] D. Litvak, Y. Zigel, and I. Gannot, “Fall detection of elderly through floor vibrations and sound,” 30th Annual International IEEE EMBS Conference, Vancouver, British Columbia, Canada, 2008.
[29] B. Najafi, K. Aminian, F. Loew, Y. Blanc, and Ph. Robert, “Fall risk evaluation in elderly using miniature gyroscope,” 1st Annual International IEEE-EMBS Special Topic Conference on Microtechnologies in Medicine and Biology, pp. 557–561, 2000.
[30] S. Aoyagi, S. Yoshimatsu, M. Oya, Y. Chida, and H. Kobayashi, “On-line distinction methods of human fall motions based on machine learning,” SICE Annual Conference, pp. 1688–1697, 2010.
[31] C. F. Lai, Y. M. Huang, J. H. Park, and H. C. Chao, “Adaptive body posture analysis for elderly-fall detection with multi-sensors,” IEEE Intelligent Systems, vol. 25, no. 2, pp. 20–30, 2010.
[32] M. Mubashir, L. Shao, and L. Seed, “A survey on fall detection: Principles and approaches,” Neurocomputing, vol. 100, pp. 144–152, 2013.
[33] J. Tao, M. Turjo, M. F. Wong, M. Wang, and Y. P. Tan, “Fall incidents detection for intelligent video surveillance,” Proceedings of Fifth International Conference on Information, Communications and Signal Processing (ICICS), pp. 1590–1594, 2005.
[34] C. W. Lin and Z. H. Ling, “Automatic fall incident detection in compressed video for intelligent homecare,” Proceedings of 16th International Conference on Computer Communications and Networks, pp. 1172–1177, 2007.
[35] C. W. Lin, Z. H. Ling, Y. C. Chang, and C. J. Kuo, “Compressed-domain fall incident detection for intelligent home surveillance,” IEEE International Symposium on Circuits and Systems, vol. 4, pp. 3781–3784, 2005.
[36] H. Foroughi, B. S. Aski, and H. Pourreza, “Intelligent video surveillance for monitoring fall detection of elderly in home environments,” Proceedings of 11th International Conference on Computer and Information Technology, pp. 219–224, 2008.
[37] C. N. Doukas and I. Maglogiannis, “Emergency fall incidents detection in assisted living environments utilizing motion, sound, and visual perceptual components,” IEEE Trans. on Information Technology in Biomedicine, vol. 15, no. 2, pp. 277–289, 2011.
[38] W. Y. Shieh and J. C. Huang, “Fall incident detection and throughput enhancement in a multi-camera video-surveillance system,” Journal of Medical Engineering & Physics, vol. 34, pp. 954–963, 2012.
[39] J. Willems, G. Debard, B. Vanrumste, and T. Goedemé, “A video-based algorithm for elderly fall detection,” IFMBE Proceedings, vol. 25/5, pp. 312–315, 2009.
[40] V. Vishwakarma, C. Mandal, and S. Sural, “Automatic detection of human fall in video,” Lecture Notes in Computer Science: Pattern Recognition and Machine Intelligence, vol. 4815, pp. 616–623, 2007.
[41] H. Qian, Y. Mao, W. Xiang, and Z. Wang, “Home environment fall detection system based on a cascaded multi-SVM classifier,” IEEE Conference on Control, Automation, Robotics and Vision, pp. 1567–1572, 2008.
[42] Y. T. Chen, Y. C. Lin, and W. H. Fang, “A hybrid human fall detection scheme,” IEEE 17th International Conference on Image Processing, pp. 3485–3488, 2010.
[43] C. Rougier, J. Meunier, A. St-Arnaud, and J. Rousseau, “Robust video surveillance for fall detection based on human shape deformation,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 21, no. 5, pp. 611–622, 2011.
[44] D. Anderson, J. M. Keller, M. Skubic, X. Chen, and Z. He, “Recognizing falls from silhouettes,” in Proceedings of the 28th Annual International Conference of IEEE Engineering in Medicine and Biology Society (EMBS), pp. 6388–6391, 2006.
[45] R. Cucchiara, A. Prati, and R. Vezzani, “An intelligent surveillance system for dangerous situation detection in home environments,” Intelligenza Artificiale, pp. 11–15, 2004.
[46] R. Cucchiara, C. Grana, A. Prati, and R. Vezzani, “Probabilistic posture classification for human-behavior analysis,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 35, pp. 42–54, 2005.
[47] C. Rougier, J. Meunier, A. St-Arnaud, and J. Rousseau, “Fall detection from human shape and motion history using video surveillance,” in Proceedings of the 21st International Conference on Advanced Information Networking and Applications Workshops, vol. 2, pp. 875–880, 2007.
[48] T. Ogata, J. Tan, and S. Ishikawa, “High-speed human motion recognition based on a motion history image and an eigenspace,” IEICE Transactions on Information and Systems, pp. 281–289, 2006.
[49] Y. T. Liao, C. L. Huang, and S. C. Hsu, “Slip and fall event detection using Bayesian Belief Network,” Pattern Recognition, vol. 45, pp. 24–32, 2012.
[50] A. Elgammal and C. S. Lee, “Nonlinear manifold learning for dynamic shape and dynamic appearance,” Computer Vision and Image Understanding, vol. 106, pp. 31–46, 2007.
[51] L. Wang and D. Suter, “Learning and matching of dynamic shape manifolds for human action recognition,” IEEE Transactions on Image Processing, vol. 16, no. 6, pp. 1646–1661, 2007.
[52] J. Davis, “Hierarchical motion history images for recognizing human motion,” IEEE Workshop on Detection and Recognition of Events in Video, pp. 39–46, 2001.
[53] D. Weinland, R. Ronfard, and E. Boyer, “Free viewpoint action recognition using motion history volumes,” Computer Vision and Image Understanding, vol. 104, no. 2, pp. 249–257, 2006.
[54] H. Meng, N. Pears, and C. Bailey, “A human action recognition system for embedded computer vision application,” IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–6, 2007.
[55] S. Sadanand and J. J. Corso, “Action bank: A high-level representation of activity in video,” IEEE Conference on Computer Vision and Pattern Recognition, 2012.
[56] I. Laptev and T. Lindeberg, “Space-time interest points,” IEEE International Conference on Computer Vision, pp. 432–439, 2003.
[57] I. Laptev, “On space-time interest points,” International Journal of Computer Vision, vol. 64, pp. 107–123, 2005.
[58] P. Dollar, V. Rabaud, G. Cottrell, and S. Belongie, “Behavior recognition via sparse spatio-temporal features,” IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance, pp. 65–72, 2005.
[59] M. Bregonzio, S. Gong, and T. Xiang, “Recognizing action as clouds of space-time interest points,” IEEE Conference on Computer Vision and Pattern Recognition, pp. 1948–1955, 2009.
[60] G. Willems, T. Tuytelaars, and L. V. Gool, “An efficient dense and scale-invariant spatio-temporal interest point detector,” European Conference on Computer Vision, vol. 5303, pp. 650–663, 2008.
[61] N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” IEEE Conference on Computer Vision and Pattern Recognition, pp. 886–893, 2005.
[62] N. Dalal, B. Triggs, and C. Schmid, “Human detection using oriented histograms of flow and appearance,” European Conference on Computer Vision, vol. 2, pp. 428–441, 2006.
[63] A. Klaser, M. Marszałek, and C. Schmid, “A spatio-temporal descriptor based on 3D-gradients,” British Machine Vision Conference, pp. 995–1004, 2008.
[64] I. Laptev, M. Marszałek, C. Schmid, and B. Rozenfeld, “Learning realistic human actions from movies,” IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8, 2008.
[65] P. Scovanner, S. Ali, and M. Shah, “A 3-dimensional SIFT descriptor and its application to action recognition,” 15th ACM International Conference on Multimedia, pp. 357–360, 2007.
[66] L. Yeffet and L. Wolf, “Local trinary patterns for human action recognition,” IEEE International Conference on Computer Vision, pp. 492–497, 2009.
[67] H. Wang, A. Klaser, C. Schmid, and C. L. Liu, “Action recognition by dense trajectories,” IEEE Conference on Computer Vision and Pattern Recognition, pp. 3169–3176, 2011.
[68] H. Wang, A. Klaser, C. Schmid, and C. L. Liu, “Dense trajectories and motion boundary descriptors for action recognition,” International Journal of Computer Vision, vol. 103, pp. 60–79, 2013.
[69] S. Ji, W. Xu, M. Yang, and K. Yu, “3D convolutional neural networks for human action recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, pp. 221–231, 2013.
[70] A. Elgammal and C. S. Lee, “Nonlinear manifold learning for dynamic shape and dynamic appearance,” Computer Vision and Image Understanding, vol. 106, pp. 31–46, 2007.
[71] L. Wang and D. Suter, “Learning and matching of dynamic shape manifolds for human action recognition,” IEEE Transactions on Image Processing, vol. 16, no. 6, pp. 1646–1661, 2007.
[72] P. Viola and M. Jones, “Robust real-time face detection,” Int. J. Comput. Vis., vol. 57, no. 2, pp. 137–154, May 2004.
[73] P. Ekman and W. Friesen, “The Facial Action Coding System: A Technique for the Measurement of Facial Movement,” Consulting Psychologists Press, Palo Alto, CA, 1978.
[74] Y. L. Tian, T. Kanade, and J. F. Cohn, “Recognizing action units for facial expression analysis,” IEEE Trans. Pattern Anal. Machine Intell., vol. 23, no. 2, pp. 97–115, 2001.
[75] P. Lucey, J. F. Cohn, T. Kanade, J. Saragih, Z. Ambadar, and I. Matthews, “The Extended Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified expression,” Computer Vision and Pattern Recognition Workshops, pp. 94–101, 2010.
[76] W. Li, Q. Q. Ruan, and J. Wan, “Fuzzy nearest feature line-based manifold embedding for facial expression recognition,” Journal of Information Science and Engineering, vol. 29, pp. 329–346, 2013.
[77] X. M. Zhao and S. Zhang, “Facial expression recognition using local binary patterns and discriminant kernel locally linear embedding,” EURASIP Journal on Advances in Signal Processing, vol. 2012, no. 20, 2012.
[78] G. Zhao and M. Pietikainen, “Facial expression recognition with spatiotemporal local descriptors,” The First Finnish Symposium for Emotions and Human-Technology Interaction, 2008.
[79] H. Y. Meng and D. Huang, “Automatic emotional state detection using facial expression dynamic in videos,” Smart Science, vol. 2, no. 4, pp. 202–208, 2014.
[80] A. B. Ashraf, S. Lucey, J. F. Cohn, T. Chen, Z. Ambadar, K. M. Prkachin, and P. E. Solomon, “The painful face: Pain expression recognition using active appearance models,” Image and Vision Computing, vol. 27, pp. 1788–1796, 2009.
[81] T. F. Cootes, G. J. Edwards, and C. J. Taylor, “Active appearance models,” IEEE Trans. Pattern Anal. Machine Intell., vol. 23, no. 6, pp. 681–685, Jun. 2001.
[82] P. Dollar, V. Rabaud, G. Cottrell, and S. Belongie, “Behavior recognition via sparse spatio-temporal features,” IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance, pp. 65–72, 2005.
[83] B. D. Lucas and T. Kanade, “An iterative image registration technique with an application to stereo vision,” Proceedings of Image Understanding Workshop, pp. 121–130, 1981.
[84] B. D. Lucas, “Generalized image matching by the method of differences,” Ph.D. dissertation, Robotics Institute, Carnegie Mellon University, July 1984.
[85] Y. C. Wu et al., “A mobile-phone-based health management system,” Chapter 2, Health Management – Different Approaches and Solutions [Online]. Available: http://www.intechopen.com/books/health-management-different-approaches-and-solutions.
[86] O. Norwood, “Male pattern baldness: classification and incidence,” South. Med. J., vol. 68, no. 11, pp. 1359–1365, Nov. 1975.
[87] R. E. Fan, P. H. Chen, and C. J. Lin, “Working set selection using second order information for training SVM,” J. of Mach. Learn. Res., vol. 6, pp. 1889–1918, Dec. 2005.
[88] R. C. Gonzalez and R. E. Woods, “Digital Image Processing,” 2nd ed., Prentice-Hall, 2002.
[89] N. Otsu, “A threshold selection method from gray-level histograms,” IEEE Trans. Systems, Man, and Cybern., vol. 9, no. 1, pp. 62–66, Jan. 1979.
[90] C. C. Han, H. Y. Mark Liao, G. J. Yu, and L. H. Chen, “Fast face detection via morphology-based pre-processing,” Pattern Recog., vol. 33, no. 10, pp. 1701–1712, Oct. 2000.
[91] B. S. Manjunath, P. Salembier, and T. Sikora, “Introduction to MPEG-7: Multimedia Content Description Interface,” John Wiley & Sons, Ltd., 2002.
[92] M. J. Sabin and R. M. Gray, “Global convergence and empirical consistency of the generalized Lloyd algorithm,” IEEE Trans. Inform. Theory, vol. 32, no. 2, pp. 148–155, Mar. 1986.
[93] A. Hyvarinen and E. Oja, “Independent component analysis: algorithms and applications,” Neural Networks, vol. 13, no. 4–5, pp. 411–430, 2000.
[94] M. Ranzato, A. Krizhevsky, and G. E. Hinton, “Factored 3-way restricted Boltzmann machines for modeling natural images,” in Proc. 13th Int. Conf. Artificial Intelligence and Statistics (AISTATS), 2010.
[95] S. Lazebnik, C. Schmid, and J. Ponce, “Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories,” IEEE Conference on Computer Vision and Pattern Recognition, 2006.
[96] J. Winn, A. Criminisi, and T. Minka, “Object categorization by learned universal visual dictionary,” IEEE International Conference on Computer Vision, 2005.
[97] S. B. Hsu, C. C. Han, C. T. Hsieh, and K. C. Fan, “Falling and slipping detection for pedestrians using a manifold learning approach,” International Conference on Machine Learning and Cybernetics (ICMLC), vol. 3, pp. 1189–1194, July 2013.
[98] Z. Zivkovic, “Improved adaptive Gaussian mixture model for background subtraction,” International Conference on Pattern Recognition, UK, August 2004.
[99] X. He and P. Niyogi, “Locality preserving projections,” Advances in Neural Information Processing Systems, vol. 16, pp. 153–160, 2004.
[100] X. He, S. Yan, Y. Hu, and H. J. Zhang, “Learning a locality preserving subspace for visual recognition,” IEEE International Conference on Computer Vision, vol. 1, pp. 385–392, October 2003.
[101] K. Messer, J. Matas, J. Kittler, J. Luettin, and G. Maitre, “XM2VTSDB: the extended M2VTS database,” in Proc. AVBPA, Washington, DC, USA, 1999, pp. 72–77.
[102] FG-NET Aging Database, 2002. [Online]. Available: www-prima.inrialpes.fr/FGnet/
[103] K. Ricanek and T. Tesafaye, “MORPH: a longitudinal image database of normal adult age-progression,” in Proc. IEEE Int. Conf. Autom. Face Gesture Recognit., 2006, pp. 341–345.
[104] X. Yu, J. Huang, S. Zhang, W. Yan, and D. N. Metaxas, “Pose-free facial landmark fitting via optimized part mixtures and deformable shape model,” in Proc. 14th IEEE Int. Conf. Computer Vision, Sydney, Australia, 2013, pp. 1944–1951.
[105] S. Yu, D. Tan, and T. Tan, “A framework for evaluating the effect of view angle, clothing and carrying condition on gait recognition,” in Proc. 18th Int. Conf. Pattern Recognition, pp. 441–444, 2006. http://www.cbsr.ia.ac.cn/english/Gait%20Databases.asp
[106] M. Kepski and B. Kwolek, “Fall detection using ceiling-mounted 3D depth camera,” in Proc. Int. Conf. on Computer Vision Theory and Applications, vol. 2, pp. 640–647, January 2014. http://fenix.univ.rzeszow.pl/mkepski/ds/uf.html
[107] I. Charfi, J. Miteran, J. Dubois, M. Atri, and R. Tourki, “Definition and performance evaluation of a robust SVM based fall detection solution,” in Proc. 8th Int. Conf. Signal Image Technology and Internet Based Systems (SITIS), pp. 218–224, Nov. 2012. http://le2i.cnrs.fr/Fall-detection-Dataset?lang=fr.
[108] M. Blank, L. Gorelick, E. Shechtman, M. Irani, and R. Basri, “Actions as space-time shapes,” IEEE International Conference on Computer Vision, pp. 1395–1402, 2005.