References
[1] C. Chen, A. Seff, A. Kornhauser, and J. Xiao, “DeepDriving: Learning affordance for direct perception in autonomous driving,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2015, pp. 2722–2730.
[2] S. M. Anwar, M. Majid, A. Qayyum, M. Awais, M. Alnowami, and M. K. Khan, “Medical image analysis using convolutional neural networks: A review,” Journal of Medical Systems, vol. 42, pp. 1–13, 2018.
[3] Y. Li, D. Tian, M.-C. Chang, X. Bian, and S. Lyu, “Robust adversarial perturbation on deep proposal-based models,” in Proceedings of the British Machine Vision Conference, 2018, p. 231.
[4] C. Xie, J. Wang, Z. Zhang, Y. Zhou, L. Xie, and A. Yuille, “Adversarial examples for semantic segmentation and object detection,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2017, pp. 1369–1378.
[5] J. Li, F. Schmidt, and Z. Kolter, “Adversarial camera stickers: A physical camera-based attack on deep learning systems,” in International Conference on Machine Learning, 2019, pp. 3896–3904.
[6] M. Sharif, S. Bhagavatula, L. Bauer, and M. K. Reiter, “Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition,” in Proceedings of the ACM SIGSAC Conference on Computer and Communications Security, 2016, pp. 1528–1540.
[7] S. Thys, W. Van Ranst, and T. Goedemé, “Fooling automated surveillance cameras: Adversarial patches to attack person detection,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019.
[8] K. Xu, G. Zhang, S. Liu, et al., “Adversarial T-shirt! Evading person detectors in a physical world,” in Proceedings of the European Conference on Computer Vision, 2020, pp. 665–681.
[9] L. Huang, C. Gao, Y. Zhou, et al., “Universal physical camouflage attacks on object detectors,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 720–729.
[10] J. Tan, N. Ji, H. Xie, and X. Xiang, “Legitimate adversarial patches: Evading human eyes and detection models in the physical world,” in Proceedings of the ACM International Conference on Multimedia, 2021, pp. 5307–5315.
[11] Y.-C.-T. Hu, B.-H. Kung, D. S. Tan, J.-C. Chen, K.-L. Hua, and W.-H. Cheng, “Naturalistic physical adversarial patch for object detectors,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 7848–7857.
[12] I. Goodfellow, J. Pouget-Abadie, M. Mirza, et al., “Generative adversarial nets,” in Advances in Neural Information Processing Systems, 2014, pp. 2672–2680.
[13] J. Ho, A. Jain, and P. Abbeel, “Denoising diffusion probabilistic models,” in Advances in Neural Information Processing Systems, vol. 33, 2020.
[14] J. Sohl-Dickstein, E. Weiss, N. Maheswaranathan, and S. Ganguli, “Deep unsupervised learning using nonequilibrium thermodynamics,” in International Conference on Machine Learning, 2015, pp. 2256–2265.
[15] C. Szegedy, W. Zaremba, I. Sutskever, et al., “Intriguing properties of neural networks,” in International Conference on Learning Representations, 2014.
[16] N. Carlini and D. Wagner, “Towards evaluating the robustness of neural networks,” in Proceedings of the IEEE Symposium on Security and Privacy, 2017, pp. 39–57.
[17] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint, 2014. [Online]. Available: http://arxiv.org/abs/1412.6980.
[18] I. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” in International Conference on Learning Representations, 2015.
[19] A. Kurakin, I. Goodfellow, and S. Bengio, “Adversarial examples in the physical world,” in International Conference on Learning Representations, 2017.
[20] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, “Towards deep learning models resistant to adversarial attacks,” in International Conference on Learning Representations, 2018.
[21] A. Athalye, L. Engstrom, A. Ilyas, and K. Kwok, “Synthesizing robust adversarial examples,” in International Conference on Machine Learning, 2018, pp. 284–293.
[22] T. B. Brown, D. Mané, A. Roy, M. Abadi, and J. Gilmer, “Adversarial patch,” in NeurIPS 2017 Workshop on Machine Learning and Computer Security, 2017.
[23] S.-T. Chen, C. Cornelius, J. Martin, and D. H. Chau, “ShapeShifter: Robust physical adversarial attack on Faster R-CNN object detector,” in Joint European Conference on Machine Learning and Knowledge Discovery in Databases, 2019, pp. 52–68.
[24] K. Eykholt, I. Evtimov, E. Fernandes, et al., “Robust physical-world attacks on deep learning visual classification,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018, pp. 1625–1634.
[25] A. Liu, X. Liu, J. Fan, et al., “Perceptual-sensitive GAN for generating adversarial patches,” in Proceedings of the AAAI Conference on Artificial Intelligence, 2019, pp. 1028–1035.
[26] C. Sitawarin, A. N. Bhagoji, A. Mosenia, M. Chiang, and P. Mittal, “DARTS: Deceiving autonomous cars with toxic signs,” arXiv preprint, 2018. [Online]. Available: http://arxiv.org/abs/1802.06430.
[27] D. Song, K. Eykholt, I. Evtimov, et al., “Physical adversarial examples for object detectors,” in Proceedings of the USENIX Workshop on Offensive Technologies, 2018.
[28] S. Komkov and A. Petiushko, “AdvHat: Real-world adversarial attack on ArcFace Face ID system,” in Proceedings of the IEEE International Conference on Pattern Recognition, 2021, pp. 819–826.
[29] M. Pautov, G. Melnikov, E. Kaziakhmedov, K. Kireev, and A. Petiushko, “On adversarial patches: Real-world attack on ArcFace-100 face recognition system,” in Proceedings of the International Multi-Conference on Engineering, Computer and Information Sciences, 2019, pp. 391–396.
[30] Z. Wu, S.-N. Lim, L. S. Davis, and T. Goldstein, “Making an invisibility cloak: Real world adversarial attacks on object detectors,” in Proceedings of the European Conference on Computer Vision, 2020, pp. 1–17.
[31] R. Duan, X. Ma, Y. Wang, J. Bailey, A. K. Qin, and Y. Yang, “Adversarial camouflage: Hiding physical-world attacks with natural styles,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 1000–1008.
[32] J. Luo, T. Bai, and J. Zhao, “Generating adversarial yet inconspicuous patches with a single image (student abstract),” in Proceedings of the AAAI Conference on Artificial Intelligence, 2021, pp. 15837–15838.
[33] D. Kingma, T. Salimans, B. Poole, and J. Ho, “Variational diffusion models,” in Advances in Neural Information Processing Systems, vol. 34, 2021, pp. 21696–21707.
[34] J. Song, C. Meng, and S. Ermon, “Denoising diffusion implicit models,” in International Conference on Learning Representations, 2021.
[35] R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer, “High-resolution image synthesis with latent diffusion models,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 10684–10695.
[36] J. Redmon and A. Farhadi, “YOLO9000: Better, faster, stronger,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2017, pp. 7263–7271.
[37] J. Redmon and A. Farhadi, “YOLOv3: An incremental improvement,” arXiv preprint, 2018. [Online]. Available: http://arxiv.org/abs/1804.02767.
[38] A. Bochkovskiy, C.-Y. Wang, and H.-Y. M. Liao, “YOLOv4: Optimal speed and accuracy of object detection,” arXiv preprint, 2020. [Online]. Available: http://arxiv.org/abs/2004.10934.
[39] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards real-time object detection with region proposal networks,” in Advances in Neural Information Processing Systems, 2015.
[40] X. Zhu, W. Su, L. Lu, B. Li, X. Wang, and J. Dai, “Deformable DETR: Deformable transformers for end-to-end object detection,” in International Conference on Learning Representations, 2021.
[41] G. Jocher, A. Chaurasia, A. Stoken, et al., ultralytics/yolov5: v7.0 - YOLOv5 SOTA Realtime Instance Segmentation, version v7.0, 2022. DOI: 10.5281/zenodo.7347926. [Online]. Available: https://doi.org/10.5281/zenodo.7347926.
[42] C.-Y. Wang, A. Bochkovskiy, and H.-Y. M. Liao, “YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors,” arXiv preprint, 2022. [Online]. Available: http://arxiv.org/abs/2207.02696.
[43] T.-Y. Lin, M. Maire, S. Belongie, et al., “Microsoft COCO: Common objects in context,” in Proceedings of the European Conference on Computer Vision, 2014, pp. 740–755.
[44] N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2005, pp. 886–893.
[45] M. Andriluka, L. Pishchulin, P. Gehler, and B. Schiele, “2d human pose estimation: New benchmark and state of the art analysis,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2014, pp. 3686–3693.
[46] J. Liu, A. Levine, C. P. Lau, R. Chellappa, and S. Feizi, “Segment and complete: Defending object detectors against adversarial patch attacks with robust patch detection,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 14973–14982.
[47] A. Radford, J. W. Kim, C. Hallacy, et al., “Learning transferable visual models from natural language supervision,” in International Conference on Machine Learning, 2021, pp. 8748–8763.
[48] J. Ho and T. Salimans, “Classifier-free diffusion guidance,” in NeurIPS 2021 Workshop on Deep Generative Models and Downstream Applications, 2021.