Thesis Record 985402005: Detailed Information
Author: Chiao-Wen Kao (高巧汶)    Department: Computer Science and Information Engineering
Thesis Title: Gaze Information Visualization and Analysis (凝視訊息視覺化與分析)
  1. This electronic thesis is approved for immediate open access.
  2. The open-access full text is licensed to users for personal, non-profit academic research purposes only, including searching, reading, and printing.
  3. Please observe the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast the work without authorization.

Abstract (Chinese) The senses are among the most important channels through which we receive information from the outside world, and analysis based on gaze information is a principal and effective way to understand human visual behavior and cognitive behavior. Because digital materials are highly diverse, different gaze mapping methods must be devised according to the characteristics of each material in order to capture the content that genuinely interests the viewer. Visualization is a way of making visual behavior concrete; it helps researchers understand more clearly the characteristics of, and the relationships between, gaze information and the viewed content. Visualizing and analyzing the gaze information collected while viewers watch different materials is therefore an attractive research topic.
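As an illustration of the kind of gaze analysis described above, the sketch below reduces raw gaze samples to fixations with the standard dispersion-threshold (I-DT) algorithm. This is a common technique rather than the dissertation's own method; the sampling rate, thresholds, and gaze-sample format are illustrative assumptions.

```python
# A minimal sketch (not the dissertation's code) of the standard
# dispersion-threshold (I-DT) fixation detector; sampling rate, thresholds,
# and the gaze-sample format are illustrative assumptions.
from typing import List, Tuple

def idt_fixations(gaze: List[Tuple[float, float]],
                  hz: float = 60.0,
                  max_dispersion: float = 25.0,  # pixels
                  min_duration: float = 0.1):    # seconds
    """Return fixations as (centroid_x, centroid_y, first_idx, last_idx)."""
    def dispersion(window):
        xs, ys = zip(*window)
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    min_len = int(min_duration * hz)
    fixations, i = [], 0
    while i + min_len <= len(gaze):
        j = i + min_len
        if dispersion(gaze[i:j]) <= max_dispersion:
            # Grow the window while the points stay tightly clustered.
            while j < len(gaze) and dispersion(gaze[i:j + 1]) <= max_dispersion:
                j += 1
            xs, ys = zip(*gaze[i:j])
            fixations.append((sum(xs) / len(xs), sum(ys) / len(ys), i, j - 1))
            i = j
        else:
            i += 1  # slide the window by one sample
    return fixations
```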
In this dissertation, materials are divided into three categories: static, instant static, and dynamic. For static materials, only common visualization patterns based on gaze density can be presented, such as areas of interest, heat maps, or viewing-order statistics. To go beyond visualizing statistical data, for instant static materials this dissertation proposes a meaningful object content extraction method that captures the object content an audience attends to on user-controlled web pages; for dynamic materials, it proposes a new visualization model of visual behavior, called Note Video, which takes the objects a viewer attends to while watching a video as its backbone and presents them as short clips. To achieve this, the dissertation proposes an automatic focused object tracking method based on gaze measurement metrics that clearly and accurately presents the viewer's visual behavior when watching dynamic materials.
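The heat map mentioned above is typically built by accumulating a Gaussian footprint per fixation, weighted by dwell time. A minimal sketch follows; the image size, kernel width, and sample fixations are invented for illustration and do not come from the dissertation.

```python
# A minimal sketch of the heat-map visualization: one Gaussian per fixation,
# weighted by dwell time. Image size, sigma, and the sample fixations are
# invented for illustration.
import numpy as np

def gaze_heatmap(fixations, width, height, sigma=40.0):
    """fixations: iterable of (x, y, duration_s); returns a [0, 1] float map."""
    ys, xs = np.mgrid[0:height, 0:width]
    heat = np.zeros((height, width), dtype=np.float64)
    for x, y, dur in fixations:
        heat += dur * np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    peak = heat.max()
    return heat / peak if peak > 0 else heat

# Three fixations on a 1280x720 stimulus, weighted by dwell time.
hm = gaze_heatmap([(400, 300, 0.8), (420, 310, 0.5), (900, 200, 0.3)], 1280, 720)
```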
Besides objective factors, such as the influence of different materials on visual behavior, subjective factors of the person, such as interest or gender, also affect gaze behavior. To reduce human intervention during data collection, this dissertation proposes an adaptive gender recognition method that uses facial components as features. Accordingly, in addition to the influence of different materials, this dissertation also examines the effect of gender on gaze behavior.
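To make the fusion idea concrete, the sketch below trains one classifier per facial component and combines their posterior probabilities with confidence weights. It is a generic stand-in, not the dissertation's CFMC: the features are random placeholders, and the simple weighted sum only approximates the adaptive fusion rule described in the table of contents.

```python
# A hedged sketch of per-component gender classification with weighted fusion.
# Generic stand-in for CFMC: features are random placeholders and the weighted
# sum only approximates the dissertation's dynamic fusion rule.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
components = ["eyes", "nose", "mouth", "hair"]
X = {c: rng.normal(size=(100, 64)) for c in components}  # stand-in features
y = rng.integers(0, 2, size=100)                         # 0 = female, 1 = male

# One probabilistic classifier per facial component.
clfs = {c: SVC(probability=True).fit(X[c], y) for c in components}

def fused_gender(features, weights=None):
    """Sum confidence-weighted posteriors over components; return the winner."""
    weights = weights or {c: 1.0 for c in components}
    scores = sum(weights[c] * clfs[c].predict_proba(features[c].reshape(1, -1))[0]
                 for c in components)
    return int(np.argmax(scores))

sample = {c: rng.normal(size=64) for c in components}
print(fused_gender(sample, weights={"eyes": 2.0, "nose": 1.0,
                                    "mouth": 1.0, "hair": 0.5}))
```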
Abstract (English) Vision is one of the most important senses through which humans assimilate information from their surroundings, and analyzing gaze information is a way to explore human visual and cognitive behaviors. Given the diversification of digital information, gaze mapping functions need to be designed according to the characteristics of different materials in order to explicitly obtain the content that interests the audience. Visualization is an effective method for making abstract visual behavior concrete; it helps researchers better understand the characteristics of, and the relationships between, gaze information and the attended content. Studying gaze visualization and analysis under different stimulus conditions is therefore an attractive avenue for understanding cognitive behaviors.
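One elementary gaze mapping function is rectangle hit-testing of fixations against named areas of interest (AOIs). The sketch below shows this mapping step; the AOI layout is an invented example rather than a stimulus from the dissertation.

```python
# A minimal sketch of one gaze mapping function: rectangle hit-testing of
# fixations against named areas of interest. The AOI layout is an invented
# example, not a stimulus from the dissertation.
from dataclasses import dataclass

@dataclass
class AOI:
    name: str
    x: int
    y: int
    w: int
    h: int

    def contains(self, px: float, py: float) -> bool:
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

aois = [AOI("headline", 0, 0, 1280, 120),
        AOI("photo", 0, 120, 640, 480),
        AOI("body_text", 640, 120, 640, 480)]

def map_fixations_to_aois(fixations):
    """fixations: iterable of (x, y); yields (x, y, aoi_name or None)."""
    for x, y in fixations:
        hit = next((a.name for a in aois if a.contains(x, y)), None)
        yield x, y, hit

print(list(map_fixations_to_aois([(100, 60), (300, 300), (999, 700)])))
```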
In this dissertation, these stimulus conditions are classified into three categories: static, instant static, and dynamic. For static stimuli, only general visualization patterns, such as areas of interest, hot spots, or fixation trajectories, can be displayed, based on gaze density information. To go beyond visualizing statistical data, this dissertation proposes a method for extracting the content of meaningful objects under the instant static stimulus condition; its advantage is the ability to obtain the content of the focused object in a web-based, user-controlled stimulus environment. For the dynamic stimulus condition, a new visualization pattern called Note Video is proposed, which uses numerous mini episodes, each associated with a focused object, to represent visual behaviors. To produce these mini episodes, a gaze-based automatic focused object tracking (AFOT) method is proposed that clearly and accurately presents the visual behaviors of the audience while watching the video.
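To suggest how a gaze-seeded mini episode might be cut, the sketch below crops a patch around the first fixation, follows it with plain template matching, and keeps the frame span during which gaze stays near the tracked patch. The dissertation's AFOT method is more elaborate; the patch size, distance threshold, and in-memory frame/gaze layout are illustrative assumptions.

```python
# A rough sketch of the gaze-seeded idea behind a mini episode, using plain
# template matching as a stand-in tracker. Patch size, distance threshold,
# and the frame/gaze layout are illustrative assumptions, not AFOT itself.
import cv2
import numpy as np

def mini_episode(frames, gaze_per_frame, patch=64, max_dist=80):
    """frames: list of BGR images; gaze_per_frame: list of (x, y) per frame.
    Returns (start, end) frame indices of the episode, or None."""
    x0, y0 = map(int, gaze_per_frame[0])
    tmpl = frames[0][max(0, y0 - patch // 2):y0 + patch // 2,
                     max(0, x0 - patch // 2):x0 + patch // 2]
    if tmpl.size == 0:
        return None
    end = 0
    for i, (frame, (gx, gy)) in enumerate(zip(frames, gaze_per_frame)):
        res = cv2.matchTemplate(frame, tmpl, cv2.TM_CCOEFF_NORMED)
        _, _, _, (tx, ty) = cv2.minMaxLoc(res)
        cx, cy = tx + patch // 2, ty + patch // 2  # tracked patch center
        if np.hypot(gx - cx, gy - cy) <= max_dist:  # gaze still on the object
            end = i
        else:
            break  # the viewer's attention left the object
    return (0, end) if end > 0 else None
```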
In addition to objective factors such as the stimuli themselves, subjective factors such as gender or interest also influence visual behaviors and can be examined. To reduce human intervention during data collection, a gender classification method based on facial components is proposed to detect the gender of the audience. Consequently, this dissertation discusses the factors that influence visual behaviors, covering not only the various stimuli but also gender.
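A typical downstream analysis, once gender labels are available, is to compare attention measures between groups. The short sketch below computes mean fixation duration and fixation counts per gender and AOI; the column names and values are assumptions made up for illustration.

```python
# A small sketch of a downstream comparison: mean fixation duration and
# fixation count per gender and AOI. Column names and values are assumptions
# made up for illustration.
import pandas as pd

df = pd.DataFrame({
    "subject": [1, 1, 2, 2, 3, 3],
    "gender":  ["F", "F", "M", "M", "F", "F"],          # e.g., from the classifier
    "aoi":     ["photo", "body", "photo", "photo", "body", "photo"],
    "fix_dur": [0.21, 0.34, 0.18, 0.25, 0.40, 0.31],    # seconds
})

summary = df.groupby(["gender", "aoi"])["fix_dur"].agg(["mean", "count"])
print(summary)
```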
Keywords (Chinese) ★ 凝視訊息 (gaze information)
★ 動態素材 (dynamic stimuli)
★ Note Video
★ 視覺行為 (visual behavior)
★ 性別辨識 (gender classification)
Keywords (English) ★ gaze information
★ dynamic stimuli
★ Note Video
★ visual behavior
★ gender classification
Table of Contents 摘要 (Chinese Abstract) i
Abstract ii
誌謝 (Acknowledgements) iv
Table of Contents v
List of Figures viii
List of Tables xii
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Organization of this dissertation 1
1.3 Contribution of this dissertation 6
Chapter 2 Related Works 8
2.1 Information Processing Theory 8
2.2 Eye tracking techniques 11
2.3 Object tracking techniques 13
2.4 Data visualization 17
2.5 Gender classification method 21
Chapter 3 The Gaze Visualization Techniques under the Static and Instant Static Stimuli 23
3.1 Audience's attention preferences for the layout compositions of photos and texts 23
3.1.1 Experimental results and discussions 27
3.1.2 Summary 31
3.2 Meaningful object extraction for webpages 31
3.2.1 Interested object mapping function 32
3.2.2 Meaningful object filter 34
3.2.3 Experimental results of extracting meaningful objects for websites 38
3.3 Summary 41
Chapter 4 The Gaze Visualization Techniques under the Dynamic Stimulus 43
4.1 Design Overview of Note Video 43
4.2 Obtaining Mini Episode 47
4.2.1 Estimating and labeling gaze point 47
4.2.2 The extended object tracking method (AFOT) 49
4.3 Establishing Note Video 57
4.3.1 Optimization of the Mini Episodes 57
4.3.2 Accomplishing the Note Video 59
4.4 Experimental results 61
4.4.1 The accuracy of the proposed object tracking method 61
4.4.2 Overview of the Note Video experiment 67
4.4.3 Obtaining Mini Episode 71
4.4.4 Visualization output: Note Video 73
4.5 Summary 77
Chapter 5 CFMC for Gender Classification 78
5.1 Overview of CFMC 78
5.2 Preprocessing procedure to segment the facial components 79
5.3 Feature extraction 82
5.3.1 Hairstyle discrimination 83
5.3.2 Training the classifiers 84
5.4 Dynamic fusion decision 86
5.5 Experimental Results for gender recognition 87
5.5.1 Accuracy of a single classifier 89
5.5.2 Accuracy of multiple classifiers with dynamic fusion decision 90
5.5.3 Robustness of adaptive fusion decision 94
5.6 Summary 96
Chapter 6 The Gender Differences in Selective Attention 97
6.1 The example of educational instruction video 97
6.2 The study of commercial video 100
Chapter 7 Conclusions 105
References 107
Advisors: Kuo-Chin Fan (范國清), Hui-Hui Chen (陳惠惠)    Date of Approval: 2018-07-26