References
[1] I. Aizenberg, N. Aizenberg, C. Butakov, and E. Farberov, “Image recognition on the neural network based on multi-valued neurons,” in Proceedings of the 15th International Conference on Pattern Recognition (ICPR-2000), 2000, vol. 2, pp. 989–992.
[2] W. S. McCulloch and W. Pitts, “A Logical Calculus of the Ideas Immanent in Nervous Activity,” Bulletin of Mathematical Biophysics, vol. 5, pp. 115–133, 1943.
[3] K. Fukushima, “Neocognitron: A Self-Organizing Neural Network Model for a Mechanism of Pattern Recognition Unaffected by Shift in Position,” Biological Cybernetics, vol. 36, pp. 193-202, 1980.
[4] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
[5] A. Krizhevsky, I. Sutskever, and G. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems (NIPS), 2012.
[6] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
[7] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” arXiv preprint arXiv:1409.4842, 2014.
[8] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
[9] D. Bahdanau, K. Cho, and Y. Bengio, “Neural machine translation by jointly learning to align and translate,” CoRR, abs/1409.0473, 2014.
[10] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, et al., “Attention is all you need,” CoRR, vol. abs/1706.03762, 2017.
[11] C. Finn, P. Abbeel, and S. Levine, “Model-agnostic meta-learning for fast adaptation of deep networks,” in International Conference on Machine Learning (ICML), 2017.
[12] J. Snell, K. Swersky, and R. Zemel, “Prototypical networks for few-shot learning,” in Advances in Neural Information Processing Systems, 2017, pp. 4077–4087.
[13] S. Gidaris and N. Komodakis, “Dynamic few-shot visual learning without forgetting,” in CVPR, 2018.
[14] M. Goldblum, S. Reich, L. Fowl, R. Ni, V. Cherepanova, and T. Goldstein, “Unraveling meta-learning: Understanding feature representations for few-shot tasks,” in ICML, 2020.
[15] J. Liu, L. Song, and Y. Qin, “Prototype rectification for few-shot learning,” in ECCV, 2020.
[16] Y. Chen, Z. Liu, H. Xu, T. Darrell, and X. Wang, “Meta-Baseline: Exploring simple meta-learning for few-shot learning,” in IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 9042–9051.
[17] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in CVPR, 2015.
[18] F. Yu and V. Koltun, “Multi-scale context aggregation by dilated convolutions,” in ICLR, 2016.
[19] L. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs,” TPAMI, 2018.
[20] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” in MICCAI, 2015.
[21] W. Liu, A. Rabinovich, and A. C. Berg, “ParseNet: Looking wider to see better,” arXiv preprint, 2015.
[22] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia, “Pyramid scene parsing network,” in CVPR, 2017.
[23] A. Shaban, S. Bansal, Z. Liu, I. Essa, and B. Boots, “One-shot learning for semantic segmentation,” in BMVC, 2017.
[24] N. Dong and E. P. Xing, “Few-shot semantic segmentation with prototype learning,” in BMVC, 2018.
[25] K. Wang, J. Liew, Y. Zou, D. Zhou, and J. Feng, “PANet: Few-shot image semantic segmentation with prototype alignment,” in ICCV, 2019.
[26] C. Zhang, G. Lin, F. Liu, R. Yao, and C. Shen, “CANet: Class-agnostic segmentation networks with iterative refinement and attentive few-shot learning,” in CVPR, 2019.
[27] G. Lin, A. Milan, C. Shen, and I. D. Reid, “RefineNet: Multi-path refinement networks for high-resolution semantic segmentation,” in CVPR, 2017.
[28] Z. Lu, S. He, X. Zhu, L. Zhang, Y.-Z. Song, and T. Xiang, “Simpler is Better: Few-shot semantic segmentation with classifier weight transformer,” in IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 8721–8730.
[29] Z. Tian, H. Zhao, M. Shu, Z. Yang, R. Li, and J. Jia, “Prior guided feature enrichment network for few-shot segmentation,” TPAMI, 2020.
[30] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, “Microsoft COCO: Common objects in context,” in ECCV, 2014.
[31] K. Nguyen and S. Todorovic, “Feature weighting and boosting for few-shot segmentation,” in ICCV, 2019.
[32] X. Luo, Z. Tian, T. Zhang, B. Yu, Y. Y. Tang, and J. Jia, “PFENet++: Boosting few-shot semantic segmentation with the noise-filtered context-aware prior mask,” arXiv preprint arXiv:2109.13788, 2021.