This proposal will use deep neural networks to extract the essential spatial features from calligraphy written by writers of different skill levels and from paintings by four world-renowned artists, in order to differentiate the painters' styles and the writers' skill levels. The four artists were selected because their styles include the key characteristics we would like to pinpoint, namely brush strokes (a feature at a small spatial scale) and the spotlight effect (a feature at a large spatial scale). These two levels of spatial features may be represented by the learned kernels at different hidden layers of a convolutional neural network (CNN), since successive layers of a CNN capture progressively larger spatial scales. The spatial features in the trained CNN model will then be compared with the eye-tracker results from simultaneous EEG/eye-tracker and fMRI/eye-tracker experiments, to test whether the trained spatial features are co-located with the gaze centers recorded while human participants differentiated the writers' skill levels and the artists' styles. The gaze centers in the eye-tracker data will then be used to guide the analysis of both the EEG and fMRI data to find gaze-related brain activations.
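The proposed co-location test between CNN features and gaze centers can be sketched as follows. The function names, the toy activation map, and the distance threshold are illustrative assumptions, not part of the proposal:

```python
import math

def peak_location(activation_map):
    """Return the (row, col) of the strongest unit in a 2D CNN activation map."""
    best, best_rc = float("-inf"), (0, 0)
    for r, row in enumerate(activation_map):
        for c, v in enumerate(row):
            if v > best:
                best, best_rc = v, (r, c)
    return best_rc

def co_located(activation_map, gaze_centers, threshold=2.0):
    """True if the activation peak lies within `threshold` pixels of any gaze center."""
    pr, pc = peak_location(activation_map)
    return any(math.hypot(pr - gr, pc - gc) <= threshold
               for gr, gc in gaze_centers)

# Toy example: a 5x5 feature map whose peak is at (2, 3)
fmap = [[0, 0, 0, 0, 0],
        [0, 1, 0, 2, 0],
        [0, 0, 0, 9, 0],
        [0, 0, 1, 0, 0],
        [0, 0, 0, 0, 0]]
print(co_located(fmap, [(2, 2)]))  # gaze center one pixel away -> True
print(co_located(fmap, [(0, 0)]))  # distant gaze center -> False
```

In practice the activation map would be upsampled to image resolution before comparison, and the threshold chosen relative to the eye tracker's spatial accuracy.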
We hypothesize that brain areas and circuits related to visual processing will be the ones most strongly highlighted at this stage of the analysis. We may therefore be able to find a correspondence between the artificial neural network and the human neural network in processing the visual features of the artworks, which facilitates the final differentiation task. Finally, the brain networks initiated from the gaze-related brain activations will be estimated using the phase-locking value and partial directed coherence for the EEG data and Granger causality for the fMRI data. As a result, we should be able to delineate the higher-level brain activity associated with the differentiation task. These higher-level activations may be the key components that set the human neural network apart from the artificial neural network in appreciating visual artworks.
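As a rough illustration of the first EEG connectivity measure mentioned above, the phase-locking value (PLV) between two channels is the magnitude of the average unit phasor of their instantaneous phase difference, PLV = |(1/N) Σ exp(i Δφ_n)|. A minimal sketch on synthetic phase series (not real EEG data; in practice the instantaneous phase would come from a Hilbert transform of band-passed signals):

```python
import cmath
import math

def phase_locking_value(phases_a, phases_b):
    """PLV = |mean of exp(i * (phi_a - phi_b))| over all time points.

    1.0 means a perfectly constant phase difference between the channels;
    values near 0 mean the phase difference is uniformly scattered.
    """
    n = len(phases_a)
    total = sum(cmath.exp(1j * (a - b)) for a, b in zip(phases_a, phases_b))
    return abs(total / n)

# Synthetic example: 10 Hz phase ramp sampled at 100 Hz for 10 s
t = [0.01 * k for k in range(1000)]
phi_a = [2 * math.pi * 10 * x for x in t]
phi_locked = [p - 0.5 for p in phi_a]                          # constant 0.5 rad lag
phi_random = [2 * math.pi * (0.618 * k) for k in range(1000)]  # scattered differences

print(round(phase_locking_value(phi_a, phi_locked), 3))  # 1.0
print(round(phase_locking_value(phi_a, phi_random), 3))
```

A constant phase lag gives PLV = 1 regardless of the lag's size, which is why PLV measures phase consistency rather than simultaneity.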