Blockchain-based Federated Learning (BCFL) systems provide a robust, privacy-preserving paradigm for collaborative model training, but they are vulnerable to sophisticated adversarial attacks such as gradient mimicry attacks, in which malicious clients imitate the behavior of honest clients. Our study examines how attackers can degrade global model performance in a BCFL environment protected by the FoolsGold and Multi-Krum algorithms. In this thesis, we propose ShardedGold, a defense framework based on gradient sharding together with an enhanced version of the FoolsGold algorithm that incorporates an order-based penalty score and a consistency score to mitigate the attackers' impact. Moreover, our method leverages the InterPlanetary File System (IPFS) and its Content Identifiers (CIDs) to overcome on-chain storage limitations. Our experimental results show that ShardedGold effectively mitigates the impact of attacks on the global model and identifies potential attackers in the system.
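For readers unfamiliar with the baseline defense that ShardedGold extends, the following is a minimal sketch of the cosine-similarity weighting at the heart of the original FoolsGold algorithm, not of the enhanced version proposed in this thesis. The intuition is that sybil attackers pushing the same poisoned direction produce unusually similar historical updates and can therefore be down-weighted during aggregation. The function name, array shapes, and constants here are illustrative assumptions.

```python
import numpy as np

def foolsgold_weights(histories):
    """Per-client aggregation weights in the spirit of FoolsGold.

    `histories` is an (n_clients, dim) array, where each row is a
    client's accumulated (historical) gradient update. Clients whose
    update directions are suspiciously similar to another client's
    (likely sybils) receive weights near 0; dissimilar clients
    receive weights near 1.
    """
    n = histories.shape[0]
    # Pairwise cosine similarity between clients' accumulated updates.
    norms = np.linalg.norm(histories, axis=1, keepdims=True)
    unit = histories / np.clip(norms, 1e-12, None)
    cs = unit @ unit.T
    np.fill_diagonal(cs, 0.0)
    maxcs = cs.max(axis=1)
    # "Pardoning": rescale an honest client's similarities relative to
    # clients with a higher maximum similarity, so honest clients are
    # not penalized merely for resembling a suspicious one.
    for i in range(n):
        for j in range(n):
            if i != j and maxcs[j] > maxcs[i] and maxcs[j] > 0:
                cs[i, j] *= maxcs[i] / maxcs[j]
    wv = np.clip(1.0 - cs.max(axis=1), 0.0, 1.0)
    # Logit rescaling sharpens the gap between low and high weights.
    if wv.max() > 0:
        wv = wv / wv.max()
    idx = (wv != 0) & (wv != 1)
    wv[idx] = np.clip(np.log(wv[idx] / (1 - wv[idx])) + 0.5, 0.0, 1.0)
    return wv
```

With these weights, the server aggregates the round's updates as a weighted average instead of a plain mean, so two sybils submitting near-identical gradients contribute almost nothing to the global model.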