References
[1] A. Géron, Hands-on machine learning with Scikit-Learn, Keras, and TensorFlow: Concepts, tools, and techniques to build intelligent systems. O’Reilly Media, 2nd ed., 2019.
[2] T. Li, G. Convertino, W. Wang, H. Most, T. Zajonc, and Y.-H. Tsai, “Hypertuner: Visual analytics for hyperparameter tuning by professionals,” in Proc. Machine Learning from User Interaction for Visualization and Analytics Workshop at IEEE VIS, 2018.
[3] G. I. Diaz, A. Fokoue-Nkoutche, G. Nannicini, and H. Samulowitz, “An effective algorithm for hyperparameter optimization of neural networks,” IBM Journal of Research and Development, vol. 61, no. 4/5, pp. 9–1, 2017.
[4] J. Bergstra and Y. Bengio, “Random search for hyper-parameter optimization,” Journal of Machine Learning Research, vol. 13, no. 2, 2012.
[5] D. E. Goldberg and J. H. Holland, “Genetic algorithms and machine learning,” 1988.
[6] D. Jönsson, G. Eilertsen, H. Shi, J. Zheng, A. Ynnerman, and J. Unger, “Visual analysis of the impact of neural network hyper-parameters,” 2020.
[7] J. Snoek, H. Larochelle, and R. P. Adams, “Practical Bayesian optimization of machine learning algorithms,” arXiv preprint arXiv:1206.2944, 2012.
[8] E. Brochu, V. M. Cora, and N. De Freitas, “A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning,” arXiv preprint arXiv:1012.2599, 2010.
[9] L. R. Rere, M. I. Fanany, and A. M. Arymurthy, “Simulated annealing algorithm for deep learning,” Procedia Computer Science, vol. 72, pp. 137–144, 2015.
[10] S. L. Smith, P.-J. Kindermans, C. Ying, and Q. V. Le, “Don’t decay the learning rate, increase the batch size,” arXiv preprint arXiv:1711.00489, 2017.
[11] L. N. Smith, “A disciplined approach to neural network hyper-parameters: Part 1–learning rate, batch size, momentum, and weight decay,” arXiv preprint arXiv:1803.09820, 2018.
[12] A. Varangaonkar, “What is interactive machine learning?” hub.packtpub.com, https://hub.packtpub.com/what-is-interactive-machine-learning/, (accessed: Apr. 28, 2021).
[13] S. Amershi, M. Cakmak, W. B. Knox, and T. Kulesza, “Power to the people: The role of humans in interactive machine learning,” AI Magazine, vol. 35, no. 4, pp. 105–120, 2014.
[14] B. Jiang and J. Canny, “Interactive machine learning via a gpu-accelerated toolkit,” in Proc. 22nd International Conference on Intelligent User Interfaces, pp. 535–546, 2017.
[15] G. Gharibi, V. Walunj, S. Rella, and Y. Lee, “ModelKB: towards automated management of the modeling lifecycle in deep learning,” in 2019 IEEE/ACM 7th International Workshop on Realizing Artificial Intelligence Synergies in Software Engineering (RAISE), pp. 28–34, IEEE, 2019.
[16] M. Vartak, H. Subramanyam, W.-E. Lee, S. Viswanathan, S. Husnoo, S. Madden, and M. Zaharia, “ModelDB: a system for machine learning model management,” in Proc. Workshop on Human-In-the-Loop Data Analytics, pp. 1–3, 2016.
[17] D. Erhan, A. Courville, Y. Bengio, and P. Vincent, “Why does unsupervised pre-training help deep learning?,” in Proc. 13th International Conference on Artificial Intelligence and Statistics, pp. 201–208, JMLR Workshop and Conference Proceedings, 2010.
[18] Y. Bengio, P. Simard, and P. Frasconi, “Learning long-term dependencies with gradient descent is difficult,” IEEE Transactions on Neural Networks, vol. 5, no. 2, pp. 157–166, 1994.
[19] S. Amershi, J. Fogarty, A. Kapoor, and D. Tan, “Examining multiple potential models in end-user interactive concept learning,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 1357–1360, 2010.
[20] K. Patel, J. Fogarty, J. A. Landay, and B. L. Harrison, “Examining difficulties software developers encounter in the adoption of statistical machine learning,” in AAAI, pp. 1563–1566, 2008.
[21] S. Chacon, “Git branching - branches in a nutshell.” git-scm.com, https://git-scm.com/book/en/v2/Git-Branching-Branches-in-a-Nutshell, (accessed: May 28, 2021).
[22] S. Eisler and J. Meyer, “Visual analytics and human involvement in machine learning,” arXiv preprint arXiv:2005.06057, 2020.
[23] ml-tooling, “best-of-ml-python.” GitHub repository, https://github.com/ml-tooling/best-of-ml-python#machine-learning-frameworks, (accessed: Apr. 22, 2021).
[24] wiki.python.org, “GlobalInterpreterLock.” wiki.python.org, https://wiki.python.org/moin/GlobalInterpreterLock, (accessed: May 7, 2021).
[25] F. Pérez and B. E. Granger, “IPython: a system for interactive scientific computing,” Computing in Science and Engineering, vol. 9, pp. 21–29, May 2007.
[26] zeromq, “zeromq/pyzmq.” GitHub repository, https://github.com/zeromq/pyzmq, (accessed: May 22, 2021).
[27] Plotly Technologies Inc., “Collaborative data science,” 2015.
[28] D. Bau, S. Liu, T. Wang, J.-Y. Zhu, and A. Torralba, “Rewriting a deep generative model,” in European Conference on Computer Vision, pp. 351–369, Springer, 2020.
[29] J. A. Fails and D. R. Olsen Jr, “Interactive machine learning,” in Proc. 8th International Conference on Intelligent User Interfaces, pp. 39–45, 2003.
[30] D. Guo, “Coordinating computational and visual approaches for interactive feature selection and multivariate clustering,” Information Visualization, vol. 2, no. 4, pp. 232–246, 2003.
[31] A Neural Network Playground - TensorFlow, “A neural network playground - TensorFlow.” playground.tensorflow.org, https://playground.tensorflow.org, (accessed: Apr. 22, 2021).
[32] A. Kapoor, B. Lee, D. Tan, and E. Horvitz, “Interactive optimization for steering machine classification,” in Proc. SIGCHI Conference on Human Factors in Computing Systems, pp. 1343–1352, 2010.
[33] J. Tsay, T. Mummert, N. Bobroff, A. Braz, P. Westerink, and M. Hirzel, “Runway: machine learning model experiment management tool,” in Conf. Systems and Machine Learning (SysML), 2018.
[34] M. Zaharia, A. Chen, A. Davidson, A. Ghodsi, S. A. Hong, A. Konwinski, S. Murching, T. Nykodym, P. Ogilvie, M. Parkhe, et al., “Accelerating the machine learning lifecycle with MLflow,” IEEE Data Eng. Bull., vol. 41, no. 4, pp. 39–45, 2018.
[35] S. Schelter, J.-H. Boese, J. Kirschnick, T. Klein, and S. Seufert, “Automatically tracking metadata and provenance of machine learning experiments,” in Machine Learning Systems Workshop at NIPS, pp. 27–29, 2017.