References
[1] Y. Ganin and V. S. Lempitsky, "Unsupervised domain adaptation by backpropagation," in International Conference on Machine Learning, 2015.
[2] E. Tzeng, J. Hoffman, N. Zhang, K. Saenko, and T. Darrell, "Deep domain confusion: Maximizing for domain invariance," arXiv preprint arXiv:1412.3474, 2014.
[3] M. Long, H. Zhu, J. Wang, and M. I. Jordan, "Unsupervised domain adaptation with residual transfer networks," in Neural Information Processing Systems, 2016.
[4] W. Zhang, W. Ouyang, W. Li, and D. Xu, "Collaborative and adversarial network for unsupervised domain adaptation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3801–3809, 2018.
[5] Z. Pei, Z. Cao, M. Long, and J. Wang, "Multi-adversarial domain adaptation," in Proceedings of the 32nd AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2018.
[6] F. Rosenblatt, "The perceptron: A probabilistic model for information storage and organization in the brain," Psychological Review, vol. 65, pp. 386–408, 1958.
[7] D. Rumelhart, G. Hinton, and R. Williams, "Learning representations by back-propagating errors," Nature, vol. 323, no. 6088, pp. 533–536, 1986.
[8] T. Tieleman, "Training restricted Boltzmann machines using approximations to the likelihood gradient," in International Conference on Machine Learning, pp. 1064–1071, 2008.
[9] G. Hinton, S. Osindero, and Y. Teh, "A fast learning algorithm for deep belief nets," Neural Computation, vol. 18, no. 7, pp. 1527–1554, 2006.
[10] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, Nov. 1998.
[11] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Neural Information Processing Systems, 2012.
[12] J. Deng et al., "ImageNet: A large-scale hierarchical image database," in IEEE Conference on Computer Vision and Pattern Recognition, 2009.
[13] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, "Going deeper with convolutions," in Computer Vision and Pattern Recognition, 2015.
[14] S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," in International Conference on Machine Learning, 2015.
[15] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, "Rethinking the Inception architecture for computer vision," arXiv e-prints, Dec. 2015.
[16] C. Szegedy, S. Ioffe, and V. Vanhoucke, "Inception-v4, Inception-ResNet and the impact of residual connections on learning," in International Conference on Learning Representations Workshop, 2016.
[17] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Computer Vision and Pattern Recognition, 2016.
[18] C. Tan, F. Sun, T. Kong, W. Zhang, C. Yang, and C. Liu, "A survey on deep transfer learning," in Proceedings of ICANN, Rhodes, Greece, pp. 270–279, 2018.
[19] J. Yosinski, J. Clune, Y. Bengio, and H. Lipson, "How transferable are features in deep neural networks?" in Advances in Neural Information Processing Systems, pp. 3320–3328, 2014.
[20] "Transfer Learning - Machine Learning's Next Frontier," http://ruder.io/transfer-learning/
[21] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," in Neural Information Processing Systems, pp. 2672–2680, 2014.
[22] E. Tzeng, J. Hoffman, N. Zhang, K. Saenko, and T. Darrell, "Deep domain confusion: Maximizing for domain invariance," CoRR, abs/1412.3474, 2014.
[23] M. Long, Y. Cao, J. Wang, and M. I. Jordan, "Learning transferable features with deep adaptation networks," in Proceedings of the 32nd International Conference on Machine Learning, pp. 97–105, 2015.
[24] M. Long, J. Wang, and M. I. Jordan, "Unsupervised domain adaptation with residual transfer networks," CoRR, abs/1602.04433, 2016.
[25] A. Krizhevsky, I. Sutskever, and G. Hinton, "ImageNet classification with deep convolutional neural networks," in Neural Information Processing Systems, 2012.
[26] C. Zimmermann and T. Brox, "Learning to estimate 3D hand pose from single RGB images," in International Conference on Computer Vision, 2017.
[27] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, "Attention is all you need," in Neural Information Processing Systems, 2017.
[28] F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally, and K. Keutzer, "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size," arXiv preprint arXiv:1602.07360, 2016.
[29] R. C. O'Reilly, "Biologically plausible error-driven learning using local activation differences: The generalized recirculation algorithm," Neural Computation, vol. 8, pp. 895–938, 1996.
[30] H. Shimodaira, "Improving predictive inference under covariate shift by weighting the log-likelihood function," Journal of Statistical Planning and Inference, vol. 90, no. 2, pp. 227–244, 2000.
[31] D. Dai and L. Van Gool, "Dark model adaptation: Semantic image segmentation from daytime to nighttime," arXiv preprint arXiv:1810.02575, 2018.
[32] Z. Zheng, X. Yang, Z. Yu, L. Zheng, Y. Yang, and J. Kautz, "Joint discriminative and generative learning for person re-identification," in Computer Vision and Pattern Recognition, 2019.
[33] S. Woo, J. Park, J.-Y. Lee, and I. S. Kweon, "CBAM: Convolutional block attention module," in Proceedings of the European Conference on Computer Vision, pp. 3–19, 2018.
[34] X. Wang, R. Girshick, A. Gupta, and K. He, "Non-local neural networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7794–7803, 2018.
[35] J. Hu, L. Shen, and G. Sun, "Squeeze-and-excitation networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
[36] K. Saenko, B. Kulis, M. Fritz, and T. Darrell, "Adapting visual category models to new domains," in European Conference on Computer Vision, pp. 213–226, 2010.
[37] K. He and J. Sun, "Convolutional neural networks at constrained time cost," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015.
[38] Y. Wang, R. Skerry-Ryan, D. Stanton, Y. Wu, R. J. Weiss, N. Jaitly, Z. Yang, Y. Xiao, Z. Chen, S. Bengio, et al., "Tacotron: Towards end-to-end speech synthesis," in Interspeech, 2017.
[39] A. Gretton, K. Borgwardt, M. Rasch, B. Schölkopf, and A. Smola, "A kernel two-sample test," Journal of Machine Learning Research, vol. 13, pp. 723–773, Mar. 2012.
[40] Y. Grandvalet and Y. Bengio, "Semi-supervised learning by entropy minimization," in Neural Information Processing Systems, 2004.
[41] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell, "DeCAF: A deep convolutional activation feature for generic visual recognition," in International Conference on Machine Learning, 2014.