References
[1] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 3431–3440, doi: 10.1109/CVPR.2015.7298965.
[2] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” in Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2015, pp. 234–241.
[3] O. Oktay et al., “Attention U-Net: Learning Where to Look for the Pancreas,” arXiv preprint arXiv:1804.03999, 2018. [Online]. Available: http://arxiv.org/abs/1804.03999.
[4] M. Cordts et al., “The Cityscapes Dataset for Semantic Urban Scene Understanding,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 3213–3223, doi: 10.1109/CVPR.2016.350.
[5] J. Dai, Y. Li, K. He, and J. Sun, “R-FCN: Object detection via region-based fully convolutional networks,” in Advances in Neural Information Processing Systems (NIPS), 2016, pp. 379–387. [Online]. Available: http://papers.nips.cc/paper/6465-r-fcn-object-detection-via-region-based-fully-convolutional-networks.pdf.
[6] L. Wang, W. Ouyang, X. Wang, and H. Lu, “Visual tracking with fully convolutional networks,” in Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2015, pp. 3119–3127, doi: 10.1109/ICCV.2015.357.
[7] C. Yu, J. Wang, C. Peng, C. Gao, G. Yu, and N. Sang, “BiSeNet: Bilateral segmentation network for real-time semantic segmentation,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 334–349, doi: 10.1007/978-3-030-01261-8_20.
[8] Y. Hu et al., “Fully Automatic Pediatric Echocardiography Segmentation Using Deep Convolutional Networks Based on BiSeNet,” in Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2019, pp. 6561–6564, doi: 10.1109/EMBC.2019.8856457.
[9] K. He, G. Gkioxari, P. Dollár, and R. Girshick, “Mask R-CNN,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 42, no. 2, pp. 386–397, 2020, doi: 10.1109/TPAMI.2018.2844175.
[10] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 40, no. 4, pp. 834–848, 2018.
[11] M.-T. Luong, H. Pham, and C. D. Manning, “Effective approaches to attention-based neural machine translation,” in Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2015, pp. 1412–1421, doi: 10.18653/v1/d15-1166.
[12] W. Wang and J. Shen, “Deep Visual Attention Prediction,” IEEE Trans. Image Process., vol. 27, no. 5, pp. 2368–2378, 2018, doi: 10.1109/TIP.2017.2787612.
[13] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” in Proceedings of the International Conference on Learning Representations (ICLR), 2015.
[14] F. Wang et al., “Residual attention network for image classification,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 6450–6458, doi: 10.1109/CVPR.2017.683.
[15] Z. Shi, C. Chen, Z. Xiong, D. Liu, Z.-J. Zha, and F. Wu, “Deep residual attention network for spectral image super-resolution,” in Lecture Notes in Computer Science, vol. 11133, Springer, 2019, pp. 214–229, doi: 10.1007/978-3-030-11021-5_14.
[16] J.-H. Kim, J.-H. Choi, M. Cheon, and J.-S. Lee, “RAM: Residual Attention Module for Single Image Super-Resolution,” arXiv preprint arXiv:1811.12043, 2018.
[17] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, “Focal Loss for Dense Object Detection,” in Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 2999–3007, doi: 10.1109/ICCV.2017.324.
[18] N. Abraham and N. M. Khan, “A novel focal Tversky loss function with improved attention U-Net for lesion segmentation,” in Proceedings of the IEEE International Symposium on Biomedical Imaging (ISBI), 2019, pp. 683–687, doi: 10.1109/ISBI.2019.8759329.
[19] H. P. Ng, S. Huang, S. H. Ong, K. W. C. Foong, P. S. Goh, and W. L. Nowinski, “Medical image segmentation using watershed segmentation with texture-based region merging,” in Proceedings of the 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2008, pp. 4039–4042, doi: 10.1109/IEMBS.2008.4650096.
[20] Y. Zhao, J. Liu, H. Li, and G. Li, “Improved watershed algorithm for dowels image segmentation,” in Proceedings of the World Congress on Intelligent Control and Automation (WCICA), 2008, pp. 7640–7643, doi: 10.1109/WCICA.2008.4594115.
[21] M. S. H. Khiyal, A. Khan, and A. Bibi, “Modified Watershed Algorithm for Segmentation of 2D Images,” Issues in Informing Sci. Inf. Technol., vol. 6, pp. 877–886, 2009, doi: 10.28945/1077.
[22] M. Bai and R. Urtasun, “Deep watershed transform for instance segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 2858–2866, doi: 10.1109/CVPR.2017.305.
[23] F. Yu and V. Koltun, “Multi-scale context aggregation by dilated convolutions,” in Proceedings of the International Conference on Learning Representations (ICLR), 2016.
[24] Y. Wei, H. Xiao, H. Shi, Z. Jie, J. Feng, and T. S. Huang, “Revisiting Dilated Convolution: A Simple Approach for Weakly- and Semi-Supervised Semantic Segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 7268–7277, doi: 10.1109/CVPR.2018.00759.
[25] “Udacity dataset.” [Online]. Available: https://github.com/udacity/self-driving-car/.