References
[1] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, 86 (11), pp. 2278-2324, 1998.
[2] A. Krizhevsky, I. Sutskever, and G. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” In Proceedings of the 25th International Conference on Neural Information Processing Systems (NIPS) - Volume 1, pp. 1097-1105, 2012.
[3] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov, “Proximal Policy Optimization Algorithms,” ArXiv, abs/1707.06347, 2017.
[4] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M.A. Riedmiller, “Playing Atari with Deep Reinforcement Learning,” ArXiv, abs/1312.5602, 2013.
[5] X. Dong, L. Liu, K. Musial, and B. Gabrys, “NATS-Bench: Benchmarking NAS Algorithms for Architecture Topology and Size,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 44 (7), pp. 3634-3646, 2022.
[6] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei, “ImageNet: A large-scale hierarchical image database,” In Proceedings of 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 248-255, 2009.
[7] K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” In Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770-778, 2016.
[8] K. Simonyan, and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” In Proceedings of 3rd International Conference on Learning Representations (ICLR), pp. 1-14, 2015.
[9] A.G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, “MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications,” ArXiv, abs/1704.04861, 2017.
[10] M. Sandler, A.G. Howard, M. Zhu, A. Zhmoginov, and L. Chen, “MobileNetV2: Inverted Residuals and Linear Bottlenecks,” In Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4510-4520, 2018.
[11] A.G. Howard, M. Sandler, G. Chu, L. Chen, B. Chen, M. Tan, W. Wang, Y. Zhu, R. Pang, V. Vasudevan, Q.V. Le, and H. Adam, “Searching for MobileNetV3,” In Proceedings of 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 1314-1324, 2019.
[12] B. Zoph, and Q. Le, “Neural Architecture Search with Reinforcement Learning,” In Proceedings of International Conference on Learning Representations (ICLR), 2017.
[13] B. Zoph, V. Vasudevan, J. Shlens, and Q. Le, “Learning Transferable Architectures for Scalable Image Recognition,” In Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8697-8710, 2018.
[14] B. Baker, O. Gupta, N. Naik, and R. Raskar, “Designing Neural Network Architectures using Reinforcement Learning,” In Proceedings of International Conference on Learning Representations (ICLR), 2017.
[15] H. Van Hasselt, A. Guez, and D. Silver, “Deep Reinforcement Learning with Double Q-Learning,” In Proceedings of the AAAI Conference on Artificial Intelligence, pp. 2094-2100, 2016.
[16] Z. Wang, T. Schaul, M. Hessel, H. Van Hasselt, M. Lanctot, and N. De Freitas, “Dueling Network Architectures for Deep Reinforcement Learning,” In Proceedings of the 33rd International Conference on Machine Learning (ICML) - Volume 48, pp. 1995-2003, 2016.
[17] R. Sutton, D. McAllester, S. Singh, and Y. Mansour, “Policy Gradient Methods for Reinforcement Learning with Function Approximation,” In Proceedings of the 12th International Conference on Neural Information Processing Systems, pp. 1057-1063, 1999.
[18] C. J. C. H. Watkins, and P. Dayan, “Q-learning,” Machine Learning, 8, pp. 279-292, 1992.
[19] V. Mnih, A. Badia, M. Mirza, A. Graves, T. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu, “Asynchronous Methods for Deep Reinforcement Learning,” In Proceedings of the 33rd International Conference on Machine Learning, pp. 1928–1937, 2016.
[20] T. Lillicrap, J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra, “Continuous control with deep reinforcement learning,” In Proceedings of International Conference on Learning Representations (ICLR), 2016.
[21] T. Elsken, J. Metzen, and F. Hutter, “Neural architecture search: A survey,” The Journal of Machine Learning Research, 20 (1), pp. 1997–2017, 2019.
[22] M. Wistuba, A. Rawat, and T. Pedapati, “A Survey on Neural Architecture Search,” ArXiv, abs/1905.01392, 2019.
[23] P. Ren, Y. Xiao, X. Chang, P.-Y. Huang, Z. Li, X. Chen, and X. Wang, “A Comprehensive Survey of Neural Architecture Search: Challenges and Solutions,” ACM Computing Surveys, 54 (4), pp. 1-34, 2021.
[24] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1-9, 2015.
[25] C. Ying, A. Klein, E. Christiansen, E. Real, K. Murphy, and F. Hutter, “NAS-Bench-101: Towards Reproducible Neural Architecture Search,” In Proceedings of the 36th International Conference on Machine Learning (ICML), pp. 7105–7114, 2019.
[26] X. Dong, and Y. Yang, “NAS-Bench-201: Extending the Scope of Reproducible Neural Architecture Search,” In Proceedings of International Conference on Learning Representations (ICLR), 2020.
[27] J.N. Siems, L. Zimmer, A. Zela, J. Lukasik, M. Keuper, and F. Hutter, “NAS-Bench-301 and the Case for Surrogate Benchmarks for Neural Architecture Search,” ArXiv, abs/2008.09777, 2020.
[28] S. Yan, C. White, Y. Savani, and F. Hutter, “NAS-Bench-x11 and the Power of Learning Curves,” In Proceedings of Advances in Neural Information Processing Systems, 2021.
[29] Y. Mehta, C. White, A. Zela, A. Krishnakumar, G. Zabergja, S. Moradian, M. Safari, K. Yu, and F. Hutter, “NAS-Bench-Suite: NAS Evaluation is (Now) Surprisingly Easy,” In Proceedings of International Conference on Learning Representations (ICLR), 2022.
[30] M. Tan, B. Chen, R. Pang, V. Vasudevan, and Q.V. Le, “MnasNet: Platform-Aware Neural Architecture Search for Mobile,” In Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2815-2823, 2019.
[31] M. Tan, and Q. Le, “EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks,” In Proceedings of the 36th International Conference on Machine Learning (ICML), pp. 6105-6114, 2019.
[32] M. Guo, Z. Zhong, W. Wu, D. Lin and J. Yan, “IRLAS: Inverse Reinforcement Learning for Architecture Search,” In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
[33] M. Suganuma, S. Shirakawa, and T. Nagao, “A Genetic Programming Approach to Designing Convolutional Neural Network Architectures,” In Proceedings of the Genetic and Evolutionary Computation Conference, pp. 497-504, 2017.
[34] L. Xie, and A. Yuille, “Genetic CNN,” In Proceedings of 2017 IEEE International Conference on Computer Vision (ICCV), pp. 1388-1397, 2017.
[35] E. Real, A. Aggarwal, Y. Huang, and Q. V. Le, “Regularized Evolution for Image Classifier Architecture Search”, In Proceedings of the AAAI Conference on Artificial Intelligence, 33 (01), pp. 4780-4789, 2019.
[36] H. Liu, K. Simonyan, and Y. Yang, “DARTS: Differentiable Architecture Search,” In Proceedings of International Conference on Learning Representations (ICLR), 2019.
[37] J. Fang, Y. Sun, Q. Zhang, Y. Li, W. Liu, and X. Wang, “Densely Connected Search Space for More Flexible Neural Architecture Search,” In Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10625-10634, 2020.
[38] G. Bender, P. Kindermans, B. Zoph, V. Vasudevan, and Q.V. Le, “Understanding and Simplifying One-Shot Architecture Search,” In Proceedings of International Conference on Machine Learning (ICML), 2018.
[39] A. Krizhevsky, and G. Hinton, “Learning multiple layers of features from tiny images,” Citeseer, Tech. Rep., 2009.