In patch-based few-shot image classification, models often misclassify because local regions exhibit high inter-class similarity. To address this challenge, we propose TAP-Net, a few-shot classification framework that incorporates a target-guided region enhancement mechanism. Our method leverages the YOLO detection model to localize target regions within images and introduces two key modules: Target-Guided Patch Reweighting, which increases the weight assigned to semantically relevant patches, and Target-Guided Feature Enhancement, which strengthens the feature representation of crucial areas by enriching their context. In addition, we propose a target-guided data augmentation strategy that reduces background interference by applying Gaussian blur to non-target regions, further improving the model's generalization ability. Experimental results on the miniImageNet dataset show that TAP-Net achieves classification accuracies of 71.31% on the 5-way 1-shot task and 84.99% on the 5-way 5-shot task, both surpassing the original FewTURE framework. These findings validate the effectiveness of the proposed approach in enhancing feature discriminability and model generalization, offering a new direction for few-shot classification research.
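The target-guided data augmentation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `box_blur` is a simple mean-filter stand-in for the Gaussian blur, and the `(x1, y1, x2, y2)` box is assumed to come from the YOLO detector.

```python
import numpy as np


def box_blur(img: np.ndarray, k: int = 7) -> np.ndarray:
    """Mean-filter blur (a simple stand-in for Gaussian blur).

    `img` is an (H, W, C) uint8 array; `k` is the odd kernel size.
    """
    pad = k // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    acc = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            acc += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return (acc / (k * k)).astype(img.dtype)


def blur_background(img: np.ndarray, box: tuple, k: int = 7) -> np.ndarray:
    """Blur everything outside the target box, keeping the target sharp.

    `box` is (x1, y1, x2, y2) in pixel coordinates, assumed to come from
    a detector such as YOLO.
    """
    x1, y1, x2, y2 = box
    out = box_blur(img, k)
    out[y1:y2, x1:x2] = img[y1:y2, x1:x2]  # restore the sharp target region
    return out
```

In practice one would apply this per training image, so the network sees attenuated background texture while the detected object is unchanged.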
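The patch-reweighting idea can likewise be sketched: assign each ViT-style patch a weight based on its overlap with the detected box. The additive `base + boost * overlap` form and the normalization to a distribution are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np


def patch_weights(img_size: int, patch_size: int, box: tuple,
                  base: float = 1.0, boost: float = 1.0) -> np.ndarray:
    """Weight each patch by its overlap with the target box (assumed form).

    Returns an (n, n) array of weights over the patch grid, normalized
    to sum to 1, where n = img_size // patch_size.
    """
    n = img_size // patch_size
    x1, y1, x2, y2 = box
    w = np.empty((n, n))
    for i in range(n):        # patch row
        for j in range(n):    # patch column
            px1, py1 = j * patch_size, i * patch_size
            px2, py2 = px1 + patch_size, py1 + patch_size
            # intersection area between this patch and the target box
            ix = max(0, min(x2, px2) - max(x1, px1))
            iy = max(0, min(y2, py2) - max(y1, py1))
            overlap = (ix * iy) / (patch_size ** 2)
            w[i, j] = base + boost * overlap
    return w / w.sum()
```

Such weights could then rescale per-patch similarity scores so that patches inside the detected target dominate the classification decision.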