    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/77655


    Title: Gaze Information Visualization and Analysis (凝視訊息視覺化與分析)
    Author: Kao, Chiao-Wen (高巧汶)
    Contributor: Department of Computer Science and Information Engineering
    Keywords: gaze information; dynamic stimuli; Note Video; visual behavior; gender classification
    Date: 2018-07-26
    Upload time: 2018-08-31 14:51:51 (UTC+8)
    Publisher: National Central University (國立中央大學)
    Abstract: Vision is one of the important senses through which humans assimilate information from their surroundings, and analyzing gaze information is a principal, effective way to explore human visual behavior and cognitive behavior. Because digital material is so varied, gaze-mapping methods must be designed around the characteristics of each kind of stimulus before the content that genuinely interests a viewer can be recovered. Visualization makes abstract visual behavior concrete, helping researchers understand the characteristics of, and the relationships between, gaze information and the attended content. Visualizing and analyzing the gaze information collected while viewers watch different kinds of stimuli is therefore an attractive research topic.
    In this dissertation, stimuli are classified into three categories: static, instant static, and dynamic. For static stimuli, only the common visualization patterns, such as areas of interest, heat maps, and fixation-order trajectories, can be displayed, and all of these are derived from gaze-density statistics.
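    As a minimal sketch of the gaze-density idea behind such heat maps (an illustration, not the dissertation's implementation; the function name gaze_heat_map and the smoothing parameter sigma are assumptions), fixations given as (x, y) screen coordinates can be accumulated into a grid and blurred with a Gaussian kernel:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaze_heat_map(gaze_points, width, height, sigma=30.0):
    """Accumulate gaze fixations into a 2-D grid and smooth the grid
    with a Gaussian kernel to obtain a normalized heat map."""
    grid = np.zeros((height, width), dtype=np.float64)
    for x, y in gaze_points:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= xi < width and 0 <= yi < height:
            grid[yi, xi] += 1.0          # one count per fixation sample
    heat = gaussian_filter(grid, sigma=sigma)
    if heat.max() > 0:
        heat /= heat.max()               # scale to [0, 1] for rendering
    return heat

# Example: 500 synthetic fixations clustered near the screen center.
rng = np.random.default_rng(0)
points = rng.normal(loc=(640, 360), scale=80, size=(500, 2))
heat = gaze_heat_map(points, width=1280, height=720)
```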
    To go beyond visualizing statistical data, this dissertation proposes, for the instant static condition, a method for extracting the content of meaningful objects; its advantage is the ability to recover the object a viewer is attending to in a web-based, user-controlled stimulus environment. For the dynamic condition, a new visualization pattern called Note Video is proposed, which represents visual behavior as a set of mini episodes whose content is associated with the focused object. To produce these episodes, a gaze-based automatic focused object tracking (AFOT) method is proposed that clearly and accurately presents the viewer's visual behavior while watching a video.
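    The abstract does not spell out how episodes are cut, so the following is a hedged sketch of only the grouping step: it assumes AFOT (or any tracker) has already labeled each video frame with the ID of the attended object, and merges consecutive frames with the same label into (object, start, end) episodes. The names frame_labels, fps, and min_frames are illustrative assumptions:

```python
from itertools import groupby

def episodes(frame_labels, fps=30.0, min_frames=15):
    """Merge per-frame attended-object labels into contiguous episodes.

    frame_labels: sequence where frame_labels[i] is the object ID the
    viewer attends to in frame i (None when no object is attended).
    Returns a list of (object_id, start_sec, end_sec) tuples, skipping
    runs shorter than min_frames to suppress brief glances.
    """
    result, frame = [], 0
    for obj, run in groupby(frame_labels):
        n = len(list(run))
        if obj is not None and n >= min_frames:
            result.append((obj, frame / fps, (frame + n) / fps))
        frame += n
    return result

# Example: the viewer attends object "A", glances away, then object "B".
labels = ["A"] * 60 + [None] * 10 + ["B"] * 45
print(episodes(labels))  # [('A', 0.0, 2.0), ('B', ~2.33, ~3.83)]
```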
    In addition to objective factors such as the stimuli themselves, subjective factors such as gender and interest also influence visual behavior. To reduce human intervention during data collection, a gender classification method based on facial components is proposed to detect the gender of each viewer. This dissertation therefore examines the factors that influence visual behavior, covering not only the various stimuli but gender as well.
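    The abstract names facial components as the features for gender classification but does not specify the classifier. Purely as an illustration of the component-feature idea (the component boxes, patch size, helper names, and the SVM back end are all assumptions, not the dissertation's method), one could crop the eye, nose, and mouth regions and train a standard classifier on the concatenated patches:

```python
import numpy as np
from skimage.transform import resize
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def component_features(face, landmarks, size=(24, 24)):
    """Crop facial-component regions from a 2-D grayscale face image
    and concatenate the resized patches into one feature vector.
    landmarks maps component name -> (x, y, w, h) in face coordinates."""
    patches = []
    for name in ("left_eye", "right_eye", "nose", "mouth"):
        x, y, w, h = landmarks[name]
        patches.append(resize(face[y:y + h, x:x + w], size).ravel())
    return np.concatenate(patches)

# A standard classifier as a stand-in back end (an assumption here):
# X is an (n_samples, n_features) array of component vectors,
# y holds the gender labels.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
# clf.fit(X, y); clf.predict(component_features(face, landmarks)[None])
```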
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Doctoral and Master's Theses

    Files in this item:

    File          Description    Size    Format    Views
    index.html                   0 KB    HTML      68


    All items in NCUIR are protected by copyright, with all rights reserved.
