As technology advances, the proliferation of applications on mobile devices has brought convenience to daily life, but these devices have also become targets of malicious attacks, especially Android, which holds roughly 70% of the market share. Researchers have achieved notable results in malware detection with artificial intelligence, particularly by using the Function Call Graph (FCG). However, attackers continue to develop new countermeasures; the adversarial attack is one such strategy, in which small modifications turn an original APK into an adversarial sample that causes the detection model to misclassify it. Prior studies report that models whose detection rate exceeds 90% can drop to a 0% detection rate on adversarial samples under such attacks, making this type of attack extremely harmful. Several defenses have been proposed; adversarial training is a common one that effectively improves the ability to detect adversarial samples, but it lowers the detection model's accuracy.

This study combines explainable AI (XAI) with adversarial sample generation. XAI is used to extract the feature importance ranking behind the model's decisions, and this ranking determines where the FCG is perturbed, misleading the detection model by altering the program structure. From the model's perspective, the adversarial samples generated with XAI target the model's weaknesses; training with these samples strengthens the model's detection ability and makes it focus more on the malicious behavior within the FCG, thereby maintaining its accuracy.

The proposed method achieves an F1-score of 94% under standard training and 91% after adversarial training, and the adversarially trained model effectively resists adversarial attacks.
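To illustrate the idea of XAI-guided perturbation described above, the following is a minimal sketch, not the authors' implementation: it assumes node-level importance scores for the FCG are already available from some explainer, and that perturbation is restricted to adding call edges toward hypothetical no-op "dummy" functions so the app's behavior is preserved. All function and variable names here are illustrative assumptions.

```python
# Minimal sketch (assumption-based): use feature importance to pick where to perturb an FCG.
import networkx as nx

def perturb_fcg(fcg: nx.DiGraph, importance: dict, top_k: int = 10, fanout: int = 3) -> nx.DiGraph:
    """Add benign-looking call edges around the nodes the detector relies on most."""
    perturbed = fcg.copy()
    # Rank nodes by the (hypothetical) importance scores produced by an explainer.
    ranked = sorted(importance, key=importance.get, reverse=True)[:top_k]
    for i, node in enumerate(ranked):
        for j in range(fanout):
            dummy = f"dummy_call_{i}_{j}"      # inserted no-op function node
            perturbed.add_edge(node, dummy)    # extra outgoing call edge dilutes the feature
    return perturbed

# Toy usage example with a tiny hand-made call graph and made-up scores.
if __name__ == "__main__":
    g = nx.DiGraph([("onCreate", "sendSms"), ("onCreate", "readContacts")])
    scores = {"sendSms": 0.9, "readContacts": 0.7, "onCreate": 0.1}
    adv = perturb_fcg(g, scores, top_k=2, fanout=2)
    print(adv.number_of_nodes(), adv.number_of_edges())
```

In the actual study, the perturbed FCGs would be fed back into training (adversarial training) so the detector learns to ignore such structural padding; the sketch only shows the perturbation-selection step.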