References
[1] M. Toldo, A. Maracani, U. Michieli, and P. Zanuttigh, “Unsupervised domain adaptation in semantic segmentation: a review,” Technologies, vol. 8, no. 2, p. 35, 2020.
[2] H. Shimodaira, “Improving predictive inference under covariate shift by weighting the log-likelihood function,” Journal of Statistical Planning and Inference, vol. 90, no. 2, pp. 227–244, 2000.
[3] M. Wang and W. Deng, “Deep Visual Domain Adaptation: A Survey,” Neurocomputing, vol. 312, pp. 135–153, Oct. 2018, doi: 10.1016/j.neucom.2018.05.083.
[4] X. Yang, C. Deng, T. Liu, and D. Tao, “Heterogeneous Graph Attention Network for Unsupervised Multiple-Target Domain Adaptation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 4, pp. 1992–2003, 2022.
[5] H. Liu, F. Liu, X. Fan, and D. Huang, “Polarized self-attention: Towards high-quality pixel-wise regression,” 2021, arXiv:2107.00782.
[6] L. Hoyer, D. Dai, H. Wang, and L. Van Gool, “MIC: Masked Image Consistency for Context-Enhanced Domain Adaptation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023, pp. 11721–11732.
[7] M. Long, H. Zhu, J. Wang, and M. I. Jordan, “Deep Transfer Learning with Joint Adaptation Networks,” in Proceedings of the 34th International Conference on Machine Learning, 2017, pp. 2208–2217.
[8] B. Sun and K. Saenko, “Deep CORAL: Correlation Alignment for Deep Domain Adaptation,” in Proceedings of the European Conference on Computer Vision (ECCV) Workshops, 2016, pp. 443–450.
[9] M. Long, H. Zhu, J. Wang, and M. I. Jordan, “Unsupervised Domain Adaptation with Residual Transfer Networks,” Advances in Neural Information Processing Systems (NeurIPS), 2016, pp. 136–144.
[10] J. Hoffman, E. Tzeng, T. Park, J.-Y. Zhu, P. Isola, K. Saenko, A. Efros, and T. Darrell, “CyCADA: Cycle-Consistent Adversarial Domain Adaptation,” in Proceedings of the 35th International Conference on Machine Learning (ICML), 2018, pp. 1989–1998.
[11] T.-H. Vu, H. Jain, M. Bucher, M. Cord, and P. Pérez, “ADVENT: Adversarial Entropy Minimization for Domain Adaptation in Semantic Segmentation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 2517–2526.
[12] Y. Luo, L. Zheng, T. Guan, J. Yu, and Y. Yang, “Taking a Closer Look at Domain Shift: Category-Level Adversaries for Semantics Consistent Domain Adaptation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 2507–2516.
[13] Y. Yang and S. Soatto, “FDA: Fourier Domain Adaptation for Semantic Segmentation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 4085–4095.
[14] K. Mei, C. Zhu, J. Zou, and S. Zhang, “Instance adaptive self-training for unsupervised domain adaptation,” in Proceedings of the European Conference on Computer Vision (ECCV), 2020, pp. 415–430.
[15] Y. Zou, Z. Yu, B. V. K. Vijaya Kumar, and J. Wang, “Unsupervised domain adaptation for semantic segmentation via class-balanced self-training,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 289–305.
[16] P. Zhang, B. Zhang, T. Zhang, D. Chen, Y. Wang, and F. Wen, “Prototypical Pseudo Label Denoising and Target Structure Learning for Domain Adaptive Semantic Segmentation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 12414–12424.
[17] Q. Zhang, J. Zhang, W. Liu, and D. Tao, “Category Anchor-Guided Unsupervised Domain Adaptation for Semantic Segmentation,” Advances in Neural Information Processing Systems (NeurIPS), 2019, pp. 435–445.
[18] M. Sajjadi, M. Javanmardi, and T. Tasdizen, “Regularization With Stochastic Transformations and Perturbations for Deep Semi-Supervised Learning,” Advances in Neural Information Processing Systems (NeurIPS), 2016, pp. 1163–1171.
[19] K. Sohn, D. Berthelot, N. Carlini, Z. Zhang, H. Zhang, C. A. Raffel, E. D. Cubuk, A. Kurakin, and C.-L. Li, “Fixmatch: Simplifying semi-supervised learning with consistency and confidence,” Advances in Neural Information Processing Systems (NeurIPS), 2020, pp. 596–608.
[20] A. Tarvainen and H. Valpola, “Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results,” Advances in Neural Information Processing Systems (NeurIPS), 2017, pp. 1195–1204.
[21] W. Tranheden, V. Olsson, J. Pinto, and L. Svensson, “DACS: Domain Adaptation via Cross-Domain Mixed Sampling,” in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2021, pp. 1379–1389.
[22] L. Hoyer, D. Dai, and L. Van Gool, “DAFormer: Improving Network Architectures and Training Strategies for Domain-Adaptive Semantic Segmentation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 9914–9925.
[23] L. Hoyer, D. Dai, and L. Van Gool, “HRDA: Context-Aware High-Resolution Domain-Adaptive Semantic Segmentation,” in Proceedings of the European Conference on Computer Vision (ECCV), 2022, pp. 372–391.
[24] E. Xie, W. Wang, Z. Yu, A. Anandkumar, J. M. Alvarez, and P. Luo, “SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers,” Advances in Neural Information Processing Systems (NeurIPS), 2021, pp. 12077–12090.
[25] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” Advances in Neural Information Processing Systems (NeurIPS), vol. 30, 2017.
[26] X. Wang, R. Girshick, A. Gupta, and K. He, “Non-local Neural Networks,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 7794–7803.
[27] Z. Shen, M. Zhang, H. Zhao, S. Yi, and H. Li, “Efficient attention: Attention with linear complexities,” in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2021, pp. 3531–3539.
[28] J. Hu, L. Shen, S. Albanie, G. Sun, and E. Wu, “Squeeze-and-Excitation Networks,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 7132–7141.
[29] Y. Cao, J. Xu, S. Lin, F. Wei, and H. Hu, “GCNet: Non-Local Networks Meet Squeeze-Excitation Networks and Beyond,” in Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), 2019, pp. 1971–1980.
[30] J. Fu, J. Liu, H. Tian, and Y. Li, “Dual Attention Network for Scene Segmentation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 3146–3154.
[31] S. Woo, J. Park, J.-Y. Lee, and I. S. Kweon, “CBAM: Convolutional Block Attention Module,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 3–19.
[32] H. Zhang, Q. Lian, J. Zhao, Y. Wang, Y. Yang, and S. Feng, “RatUNet: residual U-Net based on attention mechanism for image denoising,” PeerJ Comput. Sci., vol. 8, Art. no. e970, May 2022.
[33] P. Song, J. Li, and H. Fan, “Attention based multi-scale parallel network for polyp segmentation,” Comput. Biol. Med., vol. 146, Art. no. 105476, Jul. 2022.
[34] Z. Lv, H. Huang, W. Sun, T. Lei, J. A. Benediktsson, and J. Li, “Novel Enhanced UNet for Change Detection Using Multimodal Remote Sensing Image,” IEEE Geosci. Remote Sens. Lett., vol. 20, pp. 1–5, 2023.
[35] Q. Yu, W. Wei, Z. Pan, J. He, S. Wang, and D. Hong, “GPF-Net: Graph-polarized fusion network for hyperspectral image classification,” IEEE Trans. Geosci. Remote Sens., vol. 61, pp. 1–22, 2023, Art. no. 5519622.
[36] N. Araslanov and S. Roth, “Self-supervised augmentation consistency for adapting semantic segmentation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 15384–15394.
[37] V. Olsson, W. Tranheden, J. Pinto, and L. Svensson, “ClassMix: Segmentation-based data augmentation for semi-supervised learning,” in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2021, pp. 1369–1378.
[38] L.-C. Chen, Y. Zhu, G. Papandreou, F. Schroff, and H. Adam, “Encoder-decoder with atrous separable convolution for semantic image segmentation,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 801–818.
[39] S. R. Richter, V. Vineet, S. Roth, and V. Koltun, “Playing for data: Ground truth from computer games,” in Proceedings of the European Conference on Computer Vision (ECCV), 2016, pp. 102–118.
[40] G. Ros, L. Sellart, J. Materzynska, D. Vazquez, and A. M. Lopez, “The SYNTHIA Dataset: A Large Collection of Synthetic Images for Semantic Segmentation of Urban Scenes,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 3234–3243.
[41] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele, “The Cityscapes Dataset for Semantic Urban Scene Understanding,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 3213–3223.