In this paper, we propose a new loss function for the image retrieval task. The loss function builds on Proxy-NCA and Proxy-Anchor by assigning multiple proxies to each class, which enriches positive sample variety and lets a small batch size match the performance of a much larger one. The intra-class proxies are weighted with a SoftMax function, so that the more important proxies receive larger gradients during training. In addition to the loss function, we also modify ResNet50: only the first three stages are used as the feature extractor, the downsampling in the third stage is removed, and the fourth stage is replaced by an attention module. The attention module weights the feature map with a SoftPlus function, making important features more prominent while reducing attention on unimportant ones, and it performs better than conventional SoftMax-based attention. Both the proposed loss function and the modified ResNet50 yield large improvements in Recall@1 over the original methods.
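The abstract does not give implementation details, so the following is only a minimal PyTorch sketch of the two ideas as they could plausibly be realized. The class names (`MultiProxyAnchorLoss`, `SoftplusAttention`), the hyperparameters (`alpha`, `delta`, `num_proxies`), and the exact way the SoftMax-weighted proxy similarities enter a Proxy-Anchor-style objective are assumptions for illustration, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiProxyAnchorLoss(nn.Module):
    """Hypothetical Proxy-Anchor-style loss with several proxies per class.

    Each class owns `num_proxies` learnable proxies. For every sample, its
    similarities to one class's proxies are combined with SoftMax weights,
    so proxies that match the sample better receive more learning signal.
    """

    def __init__(self, num_classes, num_proxies, embed_dim, alpha=32.0, delta=0.1):
        super().__init__()
        self.proxies = nn.Parameter(
            torch.randn(num_classes, num_proxies, embed_dim) * 0.01)
        self.alpha, self.delta = alpha, delta

    def forward(self, embeddings, labels):
        # Cosine similarity between every sample and every proxy:
        # shape (batch, num_classes, num_proxies).
        x = F.normalize(embeddings, dim=-1)
        p = F.normalize(self.proxies, dim=-1)
        sim = torch.einsum('bd,ckd->bck', x, p)

        # SoftMax weights over each class's proxies, then a weighted
        # per-class similarity for every sample (an assumed formulation).
        weights = F.softmax(sim * self.alpha, dim=-1)
        class_sim = (weights * sim).sum(-1)            # (batch, num_classes)

        one_hot = F.one_hot(labels, class_sim.size(1)).bool()
        pos_exp = torch.exp(-self.alpha * (class_sim - self.delta))
        neg_exp = torch.exp(self.alpha * (class_sim + self.delta))

        present = one_hot.any(dim=0)                   # classes seen in this batch
        pos_term = torch.log1p((pos_exp * one_hot).sum(dim=0))[present].mean()
        neg_term = torch.log1p((neg_exp * (~one_hot)).sum(dim=0)).mean()
        return pos_term + neg_term


class SoftplusAttention(nn.Module):
    """Illustrative spatial attention that rescales a feature map with
    SoftPlus instead of SoftMax; the 1x1-conv scoring layer is an assumption."""

    def __init__(self, in_channels):
        super().__init__()
        self.score = nn.Conv2d(in_channels, 1, kernel_size=1)

    def forward(self, feat):                           # feat: (B, C, H, W)
        # SoftPlus keeps the attention weights positive but, unlike SoftMax,
        # does not force them to sum to one, so strong activations stay strong.
        attn = F.softplus(self.score(feat))            # (B, 1, H, W)
        return feat * attn
```

In this reading, `SoftplusAttention` would sit on top of the third ResNet50 stage (with its downsampling removed) in place of the fourth stage, and `MultiProxyAnchorLoss` would be applied to the resulting embeddings; the precise wiring is, again, an assumption based on the abstract.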