dc.description.abstract | Independent Component Analysis (ICA) is a widely used
method for analyzing resting-state functional magnetic resonance imaging
(rsfMRI) data. However, the component maps generated by ICA do not originate
solely from brain activation; they are often contaminated by instrument noise,
head motion, or cardiac activity. Manual inspection is commonly employed to
distinguish brain-activation component maps from non-brain ones, but an
objective, automated discrimination method is desirable. In this work, we
therefore adopted VGG, a widely used Convolutional Neural Network (CNN)
architecture, as the classification model. Through supervised learning, we
trained the VGG model to identify the brain-activation independent components
among a large number of component maps.
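As an illustration of this approach, the sketch below builds a small VGG-style
binary classifier in PyTorch. The layer counts, channel widths, and input size
are assumptions for illustration, not the exact architecture trained in this
work.

# Illustrative sketch only: a small VGG-style binary classifier in PyTorch.
# Layer counts and channel widths are assumptions, not the exact model used here.
import torch
import torch.nn as nn

class MiniVGG(nn.Module):
    def __init__(self, in_channels=3):
        super().__init__()
        # Stacked 3x3 convolutions with max pooling, in the VGG style
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(64, 1),  # one logit: brain activation vs. noise
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = MiniVGG()
logits = model(torch.randn(4, 3, 180, 180))  # a batch of 4 component-map images
print(logits.shape)  # torch.Size([4, 1])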
In this work, we tested four key parameters for constructing the model, namely
the number of training epochs, the number of model layers, the learning rate,
and the convolutional kernel size, to determine the settings that yield the
best performance. Each test image combined four views (left and right lateral,
left and right medial) of a component map that had been spatially normalized
and overlaid on the inflated Montreal Neurological Institute (MNI) standard
brain. In addition, hardware limitations forced us to reduce the resolution of
the test images. We therefore downscaled the original 520x370 images to two
resolutions, 180x180 and 50x50, trained a model at each resolution, and
compared their performance.
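As a minimal sketch of this downscaling step, the snippet below resizes a
component-map image from the original 520x370 to the two tested resolutions
using Pillow; the file name and the resampling filter are assumptions, not
details reported in this work.

# Illustrative sketch only: downscaling a 520x370 component-map image to the
# two tested resolutions. The file name and resampling filter are assumptions.
from PIL import Image

original = Image.open("component_map.png")  # 520x370 in this work
for width, height in [(180, 180), (50, 50)]:
    resized = original.resize((width, height), resample=Image.BILINEAR)
    resized.save(f"component_map_{width}x{height}.png")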
The data used in this experiment were obtained through secondary use of
previously conducted experiments in our laboratory. Component maps from
6-minute resting-state fMRI scans of 10 healthy subjects were preprocessed,
labeled, and used to train the model. The results show that, under optimized
model parameters, the VGG model trained on 180x180 images significantly
outperforms the one trained on 50x50 images in terms of Test AUC. Moreover,
when we magnified the misclassified component maps, we observed feature loss
and blurring at both resolutions, with the 50x50 maps affected more severely.
This indicates that reducing image resolution degrades the model's judgment
and suggests that, whenever hardware resources permit, inputting the
full-resolution images would optimize the model's performance. | en_US