Deep learning has been shown to diagnose Attention-Deficit/Hyperactivity Disorder (ADHD) accurately, but its black-box nature has raised concerns about trustworthiness. Fortunately, the development of explainable artificial intelligence (XAI) offers a solution to this problem. In this study, we employed a VR-based GO/NOGO task with distractions, collecting participants' eye-tracking, head-movement, and electroencephalography (EEG) data during the task. We used the collected data to train an explainable multimodal fusion model. Besides classifying between children with ADHD and normal children, the proposed model also generates explanation heatmaps. The heatmaps indicate the importance of specific variables and timestamps in the EEG data, helping us analyze the patterns captured by the model. From our observations of the heatmaps, the model highlighted time intervals commonly used to analyze event-related potential (ERP) components. The heatmaps also demonstrate that the impact of distractions differs not only between GO and NOGO events but also between children with ADHD and normal children.
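To make the heatmap idea concrete, the sketch below computes an occlusion-style importance map over a single EEG trial laid out as (variables × timestamps). This is a minimal, hypothetical illustration: the linear scorer stands in for the paper's multimodal fusion model, whose actual architecture and attribution method are not described in this abstract.

```python
import numpy as np

# Hypothetical stand-in for a trained classifier: a fixed linear scorer
# over an EEG trial of shape (n_channels, n_times). The real model in
# the study is a multimodal fusion network, not shown here.
rng = np.random.default_rng(0)
n_channels, n_times = 4, 64
weights = rng.standard_normal((n_channels, n_times))

def score(trial):
    # Stand-in class score for one trial.
    return float(np.sum(weights * trial))

def occlusion_heatmap(trial):
    # Importance of each (variable, timestamp) cell = how much the
    # score changes when that single cell is zeroed out.
    base = score(trial)
    heat = np.zeros_like(trial)
    for c in range(n_channels):
        for t in range(n_times):
            occluded = trial.copy()
            occluded[c, t] = 0.0
            heat[c, t] = abs(base - score(occluded))
    return heat

trial = rng.standard_normal((n_channels, n_times))
heat = occlusion_heatmap(trial)
print(heat.shape)  # (4, 64): one importance value per channel and timestamp
```

Averaging such per-trial maps over GO vs. NOGO events, or over the ADHD and control groups, is one way heatmaps can reveal which time intervals (e.g. those overlapping known ERP components) drive the classification.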