References
[1] Chen, X., Liu, C., Li, B., Lu, K., and Song, D. "Targeted backdoor attacks on deep learning
systems using data poisoning." arXiv preprint arXiv:1712.05526, 2017.
[2] Qiu, H., et al. "DeepSweep: An evaluation framework for mitigating DNN backdoor attacks
using data augmentation." Proceedings of the 2021 ACM Asia Conference on Computer
and Communications Security, 2021.
[3] Li, Y., et al. "Backdoor attack in the physical world." arXiv preprint arXiv:2104.02361
(workshop paper at ICLR 2021).
[4] Nguyen, T. A., and Tran, A. "Input-aware dynamic backdoor attack." Advances in Neural
Information Processing Systems, 33, pp. 3454-3464, 2020.
[5] Gao, K., et al. "Backdoor attack on hash-based image retrieval via clean-label data
poisoning." arXiv preprint arXiv:2109.08868 (BMVC 2023).
[6] Shi, Y., et al. "Black-box backdoor defense via zero-shot image purification." Advances in
Neural Information Processing Systems, 36, 2023.
[7] Zhou, J., et al. "DataElixir: Purifying poisoned dataset to mitigate backdoor attacks
via diffusion models." Proceedings of the AAAI Conference on Artificial Intelligence, vol.
38, no. 19, 2024.
[8] Sun, T., et al. "Mask and restore: Blind backdoor defense at test time with masked
autoencoder." arXiv preprint arXiv:2303.15564, 2023.
[9] Hu, S., et al. "BadHash: Invisible backdoor attacks against deep hashing with clean
label." Proceedings of the 30th ACM International Conference on Multimedia, 2022.
[10] Xia, P., Li, Z., Zhang, W., and Li, B. "Data-efficient backdoor attacks." arXiv preprint
arXiv:2204.12281, 2022.
[11] May, B. B., et al. "Salient conditional diffusion for backdoors." ICLR 2023 Workshop
on Backdoor Attacks and Defenses in Machine Learning, 2023.
[12] Bai, J., et al. "Targeted attack for deep hashing based retrieval." Computer Vision –
ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings,
Part I. Springer, 2020.
[13] Petitcolas, F. A. P., Anderson, R. J., and Kuhn, M. G. "Attacks on copyright marking
systems." In D. Aucsmith (Ed.), Information Hiding, Second International Workshop,
IH'98, Portland, Oregon, USA, April 15-17, 1998, Proceedings, LNCS 1525,
Springer-Verlag, pp. 219-239, 1998.
[14] Petitcolas, F. A. P. "Watermarking schemes evaluation." IEEE Signal Processing Magazine,
vol. 17, no. 5, pp. 58-64, September 2000.
[15] Long, J., Shelhamer, E., and Darrell, T. "Fully convolutional networks for semantic
segmentation." Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition, 2015.
[16] Chen, C., Seff, A., Kornhauser, A., and Xiao, J. "DeepDriving: Learning affordance for
direct perception in autonomous driving." Proceedings of the IEEE International
Conference on Computer Vision, pp. 2722-2730, 2015.
[17] Zeng, Y., Park, W., Mao, Z. M., and Jia, R. "Rethinking the backdoor attacks' triggers: A
frequency perspective." Proceedings of the IEEE/CVF International Conference on Computer
Vision, pp. 16473-16481, 2021.
[18] Li, Y., et al. "Backdoor learning: A survey." IEEE Transactions on Neural Networks
and Learning Systems, 2022.
[19] Gu, T., Dolan-Gavitt, B., and Garg, S. "BadNets: Identifying vulnerabilities in the
machine learning model supply chain." arXiv preprint arXiv:1708.06733, 2017.
[20] Tran, B., Li, J., and Madry, A. "Spectral signatures in backdoor attacks." Advances in
Neural Information Processing Systems, 31, 2018.
[21] Jiang, W., et al. "Color backdoor: A robust poisoning attack in color space." Proceedings
of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023.
[22] Liu, Y., et al. "Reflection backdoor: A natural backdoor attack on deep neural networks."
Computer Vision – ECCV 2020: 16th European Conference, Glasgow, UK, August
23-28, 2020, Proceedings, Part X. Springer, 2020.
[23] Han, X., Xu, G., Zhou, Y., Yang, X., Li, J., and Zhang, T. "Physical backdoor attacks to lane
detection systems in autonomous driving." Proceedings of the 30th ACM International
Conference on Multimedia, pp. 2957-2968, 2022.
[24] Turner, A., Tsipras, D., and Madry, A. "Label-consistent backdoor attacks." arXiv preprint
arXiv:1912.02771, 2019.
[25] Shumailov, I., Shumaylov, Z., Kazhdan, D., Zhao, Y., Papernot, N., Erdogdu, M. A., and
Anderson, R. J. "Manipulating SGD with data ordering attacks." Advances in Neural
Information Processing Systems, 34, pp. 18021-18032, 2021.
[26] Nguyen, T. A., and Tran, A. "WaNet – Imperceptible warping-based backdoor attack." arXiv
preprint arXiv:2102.10369, 2021.
[27] Liu, K., Dolan-Gavitt, B., and Garg, S. "Fine-pruning: Defending against backdooring
attacks on deep neural networks." International Symposium on Research in Attacks,
Intrusions, and Defenses, pp. 273-294. Springer, 2018.
[28] Zeng, Y., Chen, S., Park, W., Mao, Z. M., Jin, M., and Jia, R. "Adversarial unlearning of
backdoors via implicit hypergradient." arXiv preprint arXiv:2110.03735, 2021.
[29] Gao, Y., Xu, C., Wang, D., Chen, S., Ranasinghe, D. C., and Nepal, S. "STRIP: A defence
against trojan attacks on deep neural networks." Proceedings of the 35th Annual Computer
Security Applications Conference, pp. 113-125, 2019.
[30] Petsiuk, V., Das, A., and Saenko, K. "RISE: Randomized input sampling for explanation of
black-box models." arXiv preprint arXiv:1806.07421, 2018.
[31] Li, Y., Lyu, X., et al. "Anti-backdoor learning: Training clean models on poisoned
data." Advances in Neural Information Processing Systems, 34, 2021.
[32] Du, M., Jia, R., and Song, D. "Robust anomaly detection and backdoor attack detection
via differential privacy." Proceedings of the International Conference on Learning
Representations (ICLR), 2020.
[33] Guo, W., Tondi, B., and Barni, M. "An overview of backdoor attacks against deep neural
networks and possible defences." IEEE Open Journal of Signal Processing, vol. 3, pp.
261-287, 2022.
[34] Turner, A., Tsipras, D., and Madry, A. "Label-consistent backdoor attacks." arXiv preprint
arXiv:1912.02771, 2019.
[35] Deng, J., Dong, W., Socher, R., Li, L. J., Li, K., and Fei-Fei, L. "ImageNet: A large-scale
hierarchical image database." Proceedings of the IEEE Conference on Computer Vision
and Pattern Recognition (CVPR), pp. 248-255, 2009.