With the growing demand for deep learning models that perform well on limited data, few-shot learning has become a prominent research area. Its goal is to train models using only a few labeled examples per class; depending on how test data is handled, few-shot learning methods are divided into inductive and transductive approaches. In this work, we present DAPNet, an inductive few-shot learning model based on the Transformer architecture. The model combines Dense Networks with Multi-Head Attention, modifies the activation function, and adopts the Ranger optimizer, improving both accuracy and training efficiency. We evaluate DAPNet on two widely used few-shot learning benchmarks, MiniImageNet and TieredImageNet. The experimental results show that DAPNet matches or exceeds the accuracy of state-of-the-art models.