Training deep neural networks on long-tailed real-world datasets is challenging, as data imbalance severely harms performance on rare tail classes. This thesis addresses the critical trade-off between classification accuracy and model size. We leverage diverse Auxiliary Trainers to generate multiple candidate masks, apply a non-uniform Layered Pruning Strategy to preserve critical parameters in deeper layers, and consolidate the best pruning decisions via Weighted Mask Merging. On CIFAR-100-LT and ImageNet-LT, our models, even when pruned to 30\% and 35\% sparsity respectively, maintain higher tail-class accuracy than their full-sized, uncompressed baselines. Our work establishes that a strategically designed pruning pipeline is not merely a tool for model compression but a potent mechanism for enhancing generalization on tail classes, yielding models that are simultaneously more efficient and more accurate in real-world scenarios.
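The three-stage pipeline summarized above (candidate masks from auxiliary trainers, a depth-dependent sparsity schedule, weighted merging) can be sketched concretely. The snippet below is a minimal illustration only: the magnitude-based mask criterion, the linear depth schedule in \texttt{layer\_sparsity}, the 0.5 voting threshold, and the trainer weights are all illustrative assumptions, not the exact method developed in the thesis.

\begin{verbatim}
import torch

def candidate_mask(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Binary keep-mask pruning the `sparsity` fraction of
    smallest-magnitude weights (assumed criterion)."""
    k = int(sparsity * weight.numel())
    if k == 0:
        return torch.ones_like(weight)
    threshold = weight.abs().flatten().kthvalue(k).values
    return (weight.abs() > threshold).float()

def layer_sparsity(depth_frac: float, base: float = 0.35) -> float:
    """Non-uniform schedule: prune deeper layers less to preserve
    critical parameters (assumed linear decay toward the output)."""
    return base * (1.0 - 0.5 * depth_frac)

def merge_masks(masks, trainer_weights, keep_ratio: float = 0.5):
    """Weighted Mask Merging: keep a weight if the weighted vote of
    the auxiliary trainers' masks reaches `keep_ratio`."""
    vote = sum(w * m for w, m in zip(trainer_weights, masks))
    vote = vote / sum(trainer_weights)
    return (vote >= keep_ratio).float()

# Hypothetical usage: three auxiliary trainers vote on one layer's mask.
layer = torch.randn(64, 64)
masks = [candidate_mask(layer + 0.01 * torch.randn_like(layer),
                        layer_sparsity(depth_frac=0.5))
         for _ in range(3)]
final_mask = merge_masks(masks, trainer_weights=[1.0, 0.8, 0.6])
pruned = layer * final_mask
\end{verbatim}

Each auxiliary trainer contributes one candidate mask per layer; the merge step then consolidates them into a single pruning decision, so disagreement among trainers translates into a soft vote rather than a hard intersection or union of masks.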