Master's and Doctoral Thesis 985402015: Detailed Record




Name: Sheng-Bin Hsu (徐勝斌)    Department: Computer Science and Information Engineering
Thesis Title: 使用bag-of-word特徵進行人臉與行為分析
(Facial and Action Analysis Using Bag-of-Word Features)
Related Theses
★ MFNet: A Multi-Level Feature Fusion Neural Network for 3D Vehicle Detection Based on Point Clouds and RGB Images    ★ Multi-Proxy Loss: A Metric-Learning Loss Function for Fine-Grained Image Retrieval
★ An Image Object Recognition System Using Nearest Feature Line Embedding Networks
  1. The electronic full text of this thesis is authorized for immediate open access.
  2. The released electronic full text is licensed only for personal, non-profit retrieval, reading, and printing for academic research purposes.
  3. Please comply with the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast this work without authorization.

Abstract (Chinese) This dissertation addresses two topics, facial analysis and action analysis, which together cover facial information extraction, facial expression recognition, fall detection, and action recognition.

For facial information extraction, image processing techniques are used to extract facial features and present them in a visualized report. Five facial features are detected, extracted, stored, and retrieved: the face contour, cheek skin color, nasolabial folds (smile lines), hairline, and melanocytes. The results can be used for skin care or aesthetic medicine, and users can keep a long-term record of this information.

Facial expression is a form of nonverbal communication; for example, the expression changes caused by a patient's pain convey important information to medical staff. The same techniques can be applied to infant care, where parents can keep a long-term record of a baby's mood changes, which may be influenced by factors such as diaper comfort and physical condition. A growing number of researchers have recently developed algorithms for automatic expression recognition in still images or image sequences; however, most existing methods rely on fixed, handcrafted feature extraction steps to describe expressions.

For human action recognition, many techniques have been proposed over the past few decades. Because the input is an image sequence, features containing both spatial and temporal information are usually extracted. Feature representations fall roughly into two categories, global representations and local representations. Global representations capture the shape or silhouette of the human body and perform well because they carry rich action information, but they require foreground detection and accurate localization and tracking of the body, and they are affected by viewpoint changes, noise, and occlusion. Action recognition also has many applications for the elderly and children; for example, the daily amount of specific activities can be monitored to promote healthy living, and because children may run, jump, or perform other dangerous actions near furniture at home, real-time detection and recognition can effectively alert caregivers.

For expression and action recognition, in order to learn automatically from the image data itself how to describe dynamic variations (including temporal and spatial correlations), we use an unsupervised single-layer network for local feature extraction; it captures the local shapes and dynamic changes of an expression and thereby describes the overall variation of the face. To avoid the problems of accurate localization and foreground detection, the local representation detects space-time interest points (STIPs) in a video and assumes that these points correspond to the most informative regions of an action; dynamic or static features are extracted around them to characterize the motion. Compared with global descriptors, the local representation is more invariant to rotation, translation, and scaling, and it effectively reduces the influence of cluttered backgrounds, incomplete body silhouettes, and camera motion. The method follows the bag-of-features framework, including local feature extraction, vocabulary generation, feature vector representation, and pooling, and finally an SVM is used for recognition.

Another common life-threatening event is a fall, whether caused by careless walking or by physical discomfort; if a fall is not detected in time, the golden window for rescue may be missed. With advances in technology, surveillance cameras are now installed everywhere, so cameras can be used to monitor pedestrians for abnormal events and avoid missing that window. This study proposes a multi-view manifold learning approach that learns the normal walking patterns of pedestrians from multiple viewing angles, so that abnormal events can be identified regardless of the walking direction. In the training stage, Locality Preserving Projections (LPP) are used to build a walking model for each view, and the maximum Hausdorff distance is used to determine whether a fall has occurred. Finally, the experimental chapter evaluates the accuracy of the above methods on several datasets.
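The bag-of-features recognition pipeline summarized above (local feature extraction, vocabulary generation, histogram encoding with pooling, and SVM classification) can be illustrated with a minimal Python sketch along the following lines. It uses scikit-learn's KMeans and SVC as generic stand-ins; the descriptor extraction step, the function names, and the parameter values (k, C) are illustrative assumptions rather than the configuration used in the dissertation.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def build_vocabulary(descriptor_sets, k=200, seed=0):
    # Cluster all local descriptors (e.g., from patches around STIPs) into k visual words.
    all_desc = np.vstack(descriptor_sets)
    return KMeans(n_clusters=k, random_state=seed, n_init=10).fit(all_desc)

def encode(descriptors, vocab):
    # Sum-pooled histogram of visual-word assignments, L1-normalized.
    words = vocab.predict(descriptors)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

def train_classifier(descriptor_sets, labels, k=200):
    # descriptor_sets: one (n_i x d) array of local descriptors per training video;
    # labels: the corresponding expression/action classes.
    vocab = build_vocabulary(descriptor_sets, k)
    X = np.array([encode(d, vocab) for d in descriptor_sets])
    clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, labels)
    return vocab, clf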
Abstract (English) In this study, we propose two topics covering methods for facial and action analysis. A visual facial feature extraction scheme using image processing techniques, together with a visualized report, is proposed. Five visual facial features, face contours, face colors, smile lines, hairlines, and melanocytes, are detected, extracted, stored, and retrieved for aesthetic medicine, and the facial analysis results can be recorded over the long term for later retrieval by the user. Perception and production of facial emotion are a form of nonverbal communication between people. For example, the face of a patient in pain reveals that pain, and alarm messages can be sent to medical staff when the system detects it. Emotion recognition can also be applied to recording a baby's mood: the application keeps a long-term daily record of mood changes and reports them to the parents. Many factors can affect a baby's mood, such as diaper comfort, physical condition, and room temperature. Thus, researchers are increasingly interested in developing algorithms for automatic recognition of facial emotion in still images and videos. However, most existing methods for facial emotion recognition rely on off-the-shelf feature extraction methods for classification.

Recently, video-based human motion analysis and recognition has attracted a great deal of attention due to its potential applications and wide usage in a variety of areas, such as video surveillance, human-computer interaction, video indexing, sports event analysis, and customer attribute and shopping behavior analysis. Basically, either global or local visual features are used for human action analysis in published methods. Generally, an action is considered as a volume of video data in both the space and time domains. Global features offer a holistic representation with discriminative power; however, they are sensitive to intra-class variation and deformation, such as cluttered backgrounds and partial occlusion in action sequences, and the accuracy rates are degraded by background distortion. In contrast, the local visual representation of actions extracts local features around interest points in the spatial and temporal domains. In addition, the likelihood of falling among the elderly is relatively high, and a fall can be regarded as a life-threatening event. Behavioral analysis can be used to record the daily activities of an elderly person or a child in a particular area. Some actions, such as running or jumping, are dangerous for children in certain places. In addition to monitoring dangerous behavior, the service can also record normal behaviors, including waving, bending, and walking, to ensure that the elderly or children get enough daily activity for healthy living.

To learn better spatiotemporal features for emotion and action representation, the proposed unsupervised single-layer networks are applied to automatically learn local features, which explicitly depict the appearance and dynamic variations caused by facial emotion and human action. This combines the properties of the local visual representation with a learning-based model for automatic feature extraction: the local visual representation is robust to intra-class variability caused by scale, pose changes, and occlusion, while the learning-based model avoids handcrafted features computed from a local cuboid around interest points. The method also follows the bag-of-features framework, including local feature extraction, vocabulary generation, feature vector representation, and pooling. Finally, a non-linear SVM with an RBF kernel is used to recognize the facial emotion (human action).

In addition, falling can cause severe harm to senior citizens. The ideal time for rescue is immediately after the fall; however, falls are not always detected immediately, so real-time detection using video surveillance systems could save lives. Nowadays, digital cameras are installed everywhere, human activity is monitored by cameras connected to intelligent programs, and an alarm can be sent to the administrator when an abnormal event occurs. In this dissertation, a manifold multi-view learning algorithm is proposed for detecting falling events; it is able to detect people falling in any direction. First, walking patterns at a normal speed are modeled by a locality preserving projection (LPP). Since the duration of a fall cannot easily be segmented from a video, partial temporal windows are matched with the normal walking patterns, and Hausdorff distances are calculated for comparison. In the experiments, falls were effectively detected using the proposed method.
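A rough sketch of the fall-detection matching step described above (normal walking modeled per view with LPP, abnormality scored by the Hausdorff distance between a projected temporal window and the walking model) could look like the following Python fragment. The LPP solver follows the standard formulation (heat-kernel affinity on a k-nearest-neighbour graph and a generalized eigenproblem); the input features, neighbourhood size, and decision threshold are hypothetical and only illustrate the idea, not the dissertation's exact procedure.

import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import directed_hausdorff

def fit_lpp(X, n_components=2, n_neighbors=5, t=1.0):
    # X: (n_samples, n_features) matrix of normal-walking feature vectors for one view.
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)      # pairwise squared distances
    idx = np.argsort(d2, axis=1)[:, 1:n_neighbors + 1]        # k nearest neighbours (skip self)
    W = np.zeros((n, n))
    for i in range(n):
        W[i, idx[i]] = np.exp(-d2[i, idx[i]] / t)              # heat-kernel weights
    W = np.maximum(W, W.T)                                     # symmetrize the graph
    D = np.diag(W.sum(axis=1))
    L = D - W                                                  # graph Laplacian
    # Generalized eigenproblem X^T L X a = lambda X^T D X a; keep the smallest eigenvalues.
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])
    _, vecs = eigh(A, B)
    return vecs[:, :n_components]                              # projection matrix

def is_fall(window, walking_model, projection, threshold=2.0):
    # window: feature vectors of a sliding temporal window (not yet projected);
    # walking_model: normal-walking samples already projected by `projection`.
    projected = window @ projection
    dist = max(directed_hausdorff(projected, walking_model)[0],
               directed_hausdorff(walking_model, projected)[0])
    return dist > threshold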
Keywords (Chinese) ★ Facial Expression Recognition (表情辨識)
★ Action Recognition (行為辨識)
★ Fall Detection (跌倒偵測)
★ Facial Analysis (臉部分析)
Keywords (English) ★ Facial Expression Recognition
★ Action Recognition
★ Fall Detection
★ Facial Analysis
Table of Contents   Abstract (Chinese) i
Abstract (English) iii
Acknowledgements v
Contents vi
List of Figures viii
List of Tables x
Chapter 1 Introduction 1
Chapter 2 Related Works 14
2.1 Face detection 14
2.2 Facial component location 15
2.3 Interest points detection 17
2.4 Lucas-Kanade optical flow algorithm 18
Chapter 3 Human Facial Analysis 22
3.1 Extraction of Visual Facial Metadata for Life Recording 23
3.1.1 Generation of health metadata 24
3.1.2 Face shape extraction 24
3.1.3 Hairline detection of alopecia 25
3.1.4 Smile line detection 27
3.1.5 Melanocyte detection 29
3.1.6 Face color extraction 29
3.2 Facial emotion recognition 31
3.2.1 Facial component tracking 31
3.2.2 Facial component feature learning 32
3.2.3 Facial emotion classification 37
Chapter 4 Human Action Analysis 38
4.1 Fall detection using manifold multi-view learning methods 39
4.1.1 Construction of walking models 40
4.1.2 Fall detection using manifold projection and Hausdorff distance 45
4.2 Action recognition 48
4.2.1 Action component feature learning 49
4.2.2 Action classification 52
Chapter 5 Experimental Results 53
5.1 Experiments on Facial Health Features 53
5.1.1 Dataset collection 54
5.1.2 Experiments on face shape 56
5.1.3 Experiments on hairline detection of alopecia 58
5.1.4 Experiments on nasolabial fold and melanocyte detection 60
5.2 Experiments on facial emotion recognition 62
5.2.1 The dataset of facial emotion 62
5.2.2 Accuracy rates for various receptive field sizes and numbers of centroids 63
5.2.3 Comparison of various feature descriptors in terms of classification accuracy 66
5.3 Experiments on fall detection 68
5.3.1 Detection results on dataset NTHU 70
5.3.2 Detection results on dataset NCU 72
5.3.3 Detection results on the UR fall dataset 73
5.3.4 Detection results on dataset LE2I 74
5.4 Experiments on action recognition 77
5.4.1 Datasets for action classification 77
5.4.2 Accuracy rates for various numbers of centroids 79
5.4.3 Accuracy rates for various receptive field sizes 81
5.4.4 Accuracy rates for various Bag-of-Words sizes 83
Chapter 6 Conclusions 84
References 86
Advisors: Kuo-Chin Fan (范國清), Chin-Chuan Han (韓欽銓)    Approval Date: 2018-08-23
