Master's/Doctoral Thesis 111522147: Detailed Record




Name: Hung-Lei Lee (李泓磊)    Department: Computer Science and Information Engineering
Thesis Title: Enhancing Image Retrieval Security Against Backdoor Attacks Through Strategic Manipulations
(Chinese title: 基於策略性操作增強圖像檢索系統之安全性以對抗後門攻擊)
Related Theses
★ Single and Multi-Label Environmental Sound Recognition with Gaussian Process
★ Embedded System Implementation of Beamforming and Audio Preprocessing
★ Applications and Design of Speech Synthesis and Voice Conversion
★ A Semantics-Based Public Opinion Analysis System
★ Design and Application of a High-Quality Dictation System
★ Calcaneal Fracture Recognition and Detection in CT Images Using Deep Learning and Accelerated Robust Features
★ A Personalized Collaborative-Filtering Clothing Recommendation System Based on a Style Vector Space
★ RetinaNet Applied to Face Detection
★ Trend Prediction for Financial Products
★ Integrating Deep Learning Methods to Predict Age and Aging-Related Genes
★ Research on End-to-End Mandarin Speech Synthesis
★ Application and Improvement of ORB-SLAM2 on the ARM Architecture
★ Deep Learning-Based Trend Prediction for Exchange-Traded Funds
★ Exploring the Correlation Between Financial News and Financial Trends
★ Emotional Speech Analysis Based on Convolutional Neural Networks
★ Using Deep Learning to Predict Alzheimer's Disease Progression and Stroke Surgery Survival
Files: Full text permanently restricted (never open access)
Abstract (Chinese): This study proposes a novel defense mechanism for image retrieval models that effectively reduces the risks associated with backdoor attacks through targeted image transformations. By applying operations such as pixel removal and horizontal flipping, and dynamically adjusting the strategy according to saliency maps generated by the RISE technique, our method disrupts potential triggers embedded in images. These settings were extensively tested to ensure that they preserve accuracy on clean samples without compromising system functionality. Experimental results show that our defense not only outperforms traditional methods but also effectively counters advanced backdoor attacks on image retrieval, substantially improving the security of image retrieval systems. The approach leaves the retrieval system's operation unaffected, maintaining high precision and functionality under normal operating conditions, and neutralizes the threat without large-scale retraining of the model or changes to the system design.
Abstract (English): This research introduces a novel defense mechanism for image retrieval models that effectively mitigates risks associated with backdoor attacks through targeted image transformations. By utilizing strategic techniques such as the removal of rows or columns of pixels and horizontal flipping, and by dynamically adjusting transformations based on saliency maps generated by the RISE technique, our method disrupts potential triggers embedded within images. These adaptations are refined through extensive testing to ensure they maintain the mean average precision (mAP) of clean samples without adversely affecting system functionality. Experimental results demonstrate that our defense not only outperforms traditional methods but also effectively counteracts advanced image retrieval backdoor attacks, significantly enhancing the security of image retrieval systems. The approach allows the image retrieval system to operate efficiently, preserving high accuracy and functionality under normal operating conditions, while neutralizing threats without extensive retraining or system redesign.
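To make the pipeline in the abstract concrete, below is a minimal Python sketch of its two stages: estimating a saliency map with RISE [30], then applying the strategic manipulations (deleting the most salient pixel rows and columns, then flipping horizontally). This is an illustrative assumption of how the pieces fit together, not the thesis implementation; model_score, n_masks, grid, p, and n_remove are all hypothetical names and parameters.

import numpy as np
import cv2  # OpenCV, used here only to upsample masks

def rise_saliency(model_score, image, n_masks=1000, grid=7, p=0.5):
    # RISE [30]: average many random binary masks, each weighted by the
    # black-box score of the correspondingly masked image.
    # image: float array of shape (H, W, 3); model_score: image -> scalar.
    h, w = image.shape[:2]
    saliency = np.zeros((h, w), dtype=np.float32)
    for _ in range(n_masks):
        # Low-resolution Bernoulli(p) grid, bilinearly upsampled so the
        # resulting mask has smooth edges, as in the original RISE paper.
        cells = (np.random.rand(grid, grid) < p).astype(np.float32)
        mask = cv2.resize(cells, (w, h), interpolation=cv2.INTER_LINEAR)
        saliency += model_score(image * mask[..., None]) * mask
    return saliency / n_masks

def defend(image, saliency, n_remove=4):
    # Strategic manipulation: delete the n_remove most salient pixel rows
    # and columns (where a localized trigger is most likely to sit), then
    # flip horizontally so any residual trigger is shifted away from the
    # position the backdoored model expects.
    rows = np.argsort(saliency.sum(axis=1))[-n_remove:]
    cols = np.argsort(saliency.sum(axis=0))[-n_remove:]
    out = np.delete(image, rows, axis=0)
    out = np.delete(out, cols, axis=1)
    return out[:, ::-1]  # horizontal flip

For a deep-hashing retrieval model, model_score could, for instance, be the negated Hamming distance between the hash code of the masked image and the query's code, so the whole defense runs black-box, consistent with the abstract's claim of requiring no retraining or system redesign.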
Keywords (Chinese): ★ Backdoor attacks
★ Information security
★ Deep learning
★ Image retrieval
Keywords (English): ★ Backdoor attacks
★ Security
★ Deep learning
★ Image retrieval
Table of Contents
Abstract (Chinese) i
Abstract ii
Contents iii
List of Figures v
List of Tables vi
1 Introduction 1
2 Related Work 4
2.1 Backdoor Attacks 4
2.2 Deep Hashing for Image Retrieval 6
2.3 Backdoor Defenses 9
3 Background 11
3.1 StirMark Attack 11
3.2 RISE 12
4 Methodology 14
4.1 Backdoor Model and Defense Requirements 14
4.2 Defense Overview 15
4.3 Dynamic Removal Strategy 18
5 Evaluation 21
5.1 Implementation Details 21
5.2 StirMark Attack Effectiveness Evaluation 21
5.3 Dynamic Removal Evaluation 24
5.4 Comparison with Other Defenses 26
5.5 Adversarial Training 27
5.6 Multi-location Trigger 28
6 Conclusion 30
7 Appendix 36
7.1 Other StirMark Attacks 36
References
[1] X. Chen, C. Liu, B. Li, K. Lu, and D. Song, "Targeted backdoor attacks on deep learning systems using data poisoning," arXiv preprint arXiv:1712.05526, 2017.
[2] H. Qiu et al., "DeepSweep: An evaluation framework for mitigating DNN backdoor attacks using data augmentation," in Proceedings of the 2021 ACM Asia Conference on Computer and Communications Security, 2021.
[3] Y. Li et al., "Backdoor attack in the physical world," arXiv preprint arXiv:2104.02361 (workshop paper at ICLR 2021).
[4] T. A. Nguyen and A. Tran, "Input-aware dynamic backdoor attack," in Advances in Neural Information Processing Systems, vol. 33, pp. 3454-3464, 2020.
[5] K. Gao et al., "Backdoor attack on hash-based image retrieval via clean-label data poisoning," arXiv preprint arXiv:2109.08868v3 (BMVC 2023).
[6] Y. Shi et al., "Black-box backdoor defense via zero-shot image purification," in Advances in Neural Information Processing Systems, vol. 36 (NeurIPS 2023).
[7] J. Zhou et al., "DataElixir: Purifying poisoned dataset to mitigate backdoor attacks via diffusion models," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 38, no. 19, 2024.
[8] T. Sun et al., "Mask and restore: Blind backdoor defense at test time with masked autoencoder," arXiv preprint arXiv:2303.15564, 2023.
[9] S. Hu et al., "BadHash: Invisible backdoor attacks against deep hashing with clean label," in Proceedings of the 30th ACM International Conference on Multimedia, 2022.
[10] P. Xia, Z. Li, W. Zhang, and B. Li, "Data-efficient backdoor attacks," arXiv preprint arXiv:2204.12281, 2022.
[11] B. B. May et al., "Salient conditional diffusion for backdoors," in ICLR 2023 Workshop on Backdoor Attacks and Defenses in Machine Learning, 2023.
[12] J. Bai et al., "Targeted attack for deep hashing based retrieval," in Computer Vision – ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part I, vol. 16, Springer International Publishing, 2020.
[13] F. A. P. Petitcolas, R. J. Anderson, and M. G. Kuhn, "Attacks on copyright marking systems," in D. Aucsmith (Ed.), Information Hiding, Second International Workshop, IH'98, Portland, Oregon, USA, April 15-17, 1998, Proceedings, LNCS 1525, Springer-Verlag, ISBN 3-540-65386-4, pp. 219-239.
[14] F. A. P. Petitcolas, "Watermarking schemes evaluation," IEEE Signal Processing Magazine, vol. 17, no. 5, pp. 58-64, September 2000.
[15] J. Long, E. Shelhamer, and T. Darrell, "Fully convolutional networks for semantic segmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015.
[16] C. Chen, A. Seff, A. Kornhauser, and J. Xiao, "DeepDriving: Learning affordance for direct perception in autonomous driving," in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 2722-2730.
[17] Y. Zeng, W. Park, Z. M. Mao, and R. Jia, "Rethinking the backdoor attacks' triggers: A frequency perspective," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 16473-16481.
[18] Y. Li et al., "Backdoor learning: A survey," IEEE Transactions on Neural Networks and Learning Systems, 2022.
[19] T. Gu, B. Dolan-Gavitt, and S. Garg, "BadNets: Identifying vulnerabilities in the machine learning model supply chain," arXiv preprint arXiv:1708.06733, 2017.
[20] B. Tran, J. Li, and A. Madry, "Spectral signatures in backdoor attacks," in Advances in Neural Information Processing Systems, vol. 31, 2018.
[21] W. Jiang et al., "Color backdoor: A robust poisoning attack in color space," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023.
[22] Y. Liu et al., "Reflection backdoor: A natural backdoor attack on deep neural networks," in Computer Vision – ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part X, vol. 16, Springer International Publishing, 2020.
[23] X. Han, G. Xu, Y. Zhou, X. Yang, J. Li, and T. Zhang, "Physical backdoor attacks to lane detection systems in autonomous driving," in Proceedings of the 30th ACM International Conference on Multimedia, pp. 2957-2968, October 2022.
[24] A. Turner, D. Tsipras, and A. Madry, "Label-consistent backdoor attacks," arXiv preprint arXiv:1912.02771, 2019.
[25] I. Shumailov, Z. Shumaylov, D. Kazhdan, Y. Zhao, N. Papernot, M. A. Erdogdu, and R. J. Anderson, "Manipulating SGD with data ordering attacks," in Advances in Neural Information Processing Systems, vol. 34, pp. 18021-18032, 2021.
[26] A. Nguyen and A. Tran, "WaNet: Imperceptible warping-based backdoor attack," arXiv preprint arXiv:2102.10369, 2021.
[27] K. Liu, B. Dolan-Gavitt, and S. Garg, "Fine-pruning: Defending against backdooring attacks on deep neural networks," in International Symposium on Research in Attacks, Intrusions, and Defenses, pp. 273-294, September 2018, Cham: Springer International Publishing.
[28] Y. Zeng, S. Chen, W. Park, Z. M. Mao, M. Jin, and R. Jia, "Adversarial unlearning of backdoors via implicit hypergradient," arXiv preprint arXiv:2110.03735, 2021.
[29] Y. Gao, C. Xu, D. Wang, S. Chen, D. C. Ranasinghe, and S. Nepal, "STRIP: A defence against trojan attacks on deep neural networks," in Proceedings of the 35th Annual Computer Security Applications Conference, pp. 113-125, December 2019.
[30] V. Petsiuk, A. Das, and K. Saenko, "RISE: Randomized input sampling for explanation of black-box models," arXiv preprint arXiv:1806.07421, 2018.
[31] Y. Li, X. Lyu, et al., "Anti-backdoor learning: Training clean models on poisoned data," in Proceedings of the 35th Conference on Neural Information Processing Systems (NeurIPS), 2021.
[32] M. Du, R. Jia, and D. Song, "Robust anomaly detection and backdoor attack detection via differential privacy," in Proceedings of the International Conference on Learning Representations (ICLR), 2020.
[33] W. Guo, B. Tondi, and M. Barni, "An overview of backdoor attacks against deep neural networks and possible defences," IEEE Open Journal of Signal Processing, vol. 3, pp. 261-287, 2022.
[34] A. Turner, D. Tsipras, and A. Madry, "Label-consistent backdoor attacks," arXiv preprint arXiv:1912.02771, 2019.
[35] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, "ImageNet: A large-scale hierarchical image database," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 248-255, 2009.
Advisor: Jia-Ching Wang (王家慶)    Date of Approval: 2024-09-11